Compare commits
9 Commits
4cf601df84 ... 53cd519fa3

| SHA1 |
|------|
| 53cd519fa3 |
| 1681a246de |
| 0d860d4a4f |
| 67d7313691 |
| 03baf67e79 |
| e5c8fb3d48 |
| 4ac16cedc4 |
| 53cf49f158 |
| bbdd83f9f6 |
13 .editorconfig Normal file
@@ -0,0 +1,13 @@
root = true

[*]
charset = utf-8
end_of_line = lf
indent_size = 2
indent_style = space
max_line_length = 120
trim_trailing_whitespace = true

[**.{md,rst}]
indent_size = 2
max_line_length = 80
21 .github/workflows/flake-check.yaml vendored Normal file
@@ -0,0 +1,21 @@
name: "Nix flake check"
on:
  workflow_call:
  pull_request:
  push:
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: cachix/install-nix-action@v31
        with:
          nix_path: nixpkgs=channel:nixos-unstable
      - name: Check formatting with nixfmt
        run: nix run nixpkgs#nixfmt-rfc-style -- --check .
      - name: Lint with statix
        run: nix run nixpkgs#statix -- check
      - name: Find dead code with deadnix
        run: nix run nixpkgs#deadnix -- --fail
      - name: Run flake check
        run: nix flake check --accept-flake-config
1 .mirroring-test Normal file
@@ -0,0 +1 @@
test-1770296859
149 BOOKS_PAPERS_MIGRATION_PLAN.md Normal file
@@ -0,0 +1,149 @@
# Migration Plan: Move books and papers to flat directory

## Current State
- **Books location:** `/data/desk/home.h.doc/books`
- **Papers location:** `/data/desk/home.h.doc/papers`
- **Current syncthing path:** `~/doc/readings` → `/home/h/doc/readings`
- **Zotero:** Currently active, will be kept during/after migration
- **Future Papis:** Will use the same files once consolidated

## Decision Summary
- **Target path:** `/data/desk/home.h.doc/readings` (single flat directory)
- **Organization:** Completely flat (no subdirectories); use Papis/Zotero tags for categorization
- **Zotero:** Keep active during/after migration
- **Rebuild timing:** After files are moved (safer: syncthing won't sync while files are moving)

---

## Implementation Steps

### Step 1: Update syncthing config (andromache)
**File:** `hosts/andromache/default.nix`

Change the syncthing folder path from:
```nix
path = "/home/h/doc/readings";
```

To:
```nix
path = "/data/desk/home.h.doc/readings";
```
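
For context, the surrounding folder definition in a NixOS syncthing config typically looks something like the sketch below; the folder name and device list are assumptions, only the `path` value comes from this plan:

```nix
# Hypothetical sketch; the real folder id and device list live in
# hosts/andromache/default.nix
services.syncthing.settings.folders."readings" = {
  path = "/data/desk/home.h.doc/readings"; # the new target path
  devices = [ "boox" ]; # assumed device name
};
```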

### Step 2: Rebuild andromache
```bash
sudo nixos-rebuild switch --flake /home/h/nix
```

This applies the new syncthing configuration.

### Step 3: Prepare target directory
```bash
# Create the target directory (in case it doesn't exist)
mkdir -p /data/desk/home.h.doc/readings
```

### Step 4: Move files (EXECUTE THIS MANUALLY)

Choose one method:

**Method A: Move (removes original directories)**
```bash
mv /data/desk/home.h.doc/books/* /data/desk/home.h.doc/readings/
mv /data/desk/home.h.doc/papers/* /data/desk/home.h.doc/readings/
rmdir /data/desk/home.h.doc/books /data/desk/home.h.doc/papers
```

**Method B: Copy (keeps original directories as backup)**
```bash
cp -r /data/desk/home.h.doc/books/* /data/desk/home.h.doc/readings/
cp -r /data/desk/home.h.doc/papers/* /data/desk/home.h.doc/readings/
```
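
Before running either method, it may help to list filenames that exist in both source directories, since whichever is moved second would overwrite same-named files. A minimal sketch using throwaway demo directories (substitute the real `books/` and `papers/` paths):

```shell
# Demo directories stand in for the real books/ and papers/ paths
mkdir -p /tmp/readings-demo/books /tmp/readings-demo/papers
touch /tmp/readings-demo/books/a.pdf /tmp/readings-demo/books/shared.pdf
touch /tmp/readings-demo/papers/shared.pdf

# Print names present in both directories (process substitution needs bash)
comm -12 <(ls /tmp/readings-demo/books | sort) <(ls /tmp/readings-demo/papers | sort)
# → shared.pdf
```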

### Step 5: Configure Boox to sync new path

On your Boox device, update the Syncthing folder to sync:
- Path: Choose where you want the files (e.g., `/sdcard/Books/readings` or `/sdcard/Documents/readings`)
- Accept the connection from andromache when prompted

---

## Post-Migration Verification

### 1. Verify syncthing on andromache
- Open http://localhost:8384
- Confirm the `readings` folder points to `/data/desk/home.h.doc/readings`
- Check that files are being synced to Boox

### 2. Verify Boox receives files
- Check that files from the new directory appear on Boox
- Confirm the `readings` folder is active on Boox

### 3. Verify Zotero
- Ensure Zotero can still access files at the new location
- Check that tags/categorization still work
- Verify PDFs open correctly from the Zotero library

---

## Future Work: Papis Migration

When ready to migrate to Papis:

1. Install Papis: `nix-shell -p papis`
2. Configure Papis to use: `/data/desk/home.h.doc/readings`
3. Import from Zotero or start fresh
4. Both Zotero and Papis can coexist during the transition
5. Gradually migrate to Papis, then retire Zotero

---

## Rollback Plan

If anything goes wrong:

### Option 1: Revert syncthing config
```bash
# In hosts/andromache/default.nix, change back to:
path = "/home/h/doc/readings";

# Rebuild:
sudo nixos-rebuild switch --flake /home/h/nix
```

### Option 2: Restore original directories
If Method A (move) was used:
```bash
mkdir -p /data/desk/home.h.doc/books /data/desk/home.h.doc/papers
# You'll need to manually move files back from readings/
```

If Method B (copy) was used:
```bash
# Original directories still exist as backups at:
/data/desk/home.h.doc/books
/data/desk/home.h.doc/papers
```

---

## Session Checklist

- [ ] Update syncthing config in andromache
- [ ] Rebuild andromache
- [ ] Create target directory
- [ ] Move files (choose method: move or copy)
- [ ] Configure Boox folder path
- [ ] Verify syncthing sync
- [ ] Verify Zotero access
- [ ] (Future) Install and configure Papis

---

## Notes

- **File conflicts:** If `books/` and `papers/` contain files with the same name, the one moved second (from `papers/`) overwrites the one moved first (from `books/`). Check for duplicate filenames beforehand.

- **Zotero database:** No changes should be needed for attachments stored inside Zotero's own data directory. Linked-file attachments, however, reference files by path, so they will need relinking after the move (a Linked Attachment Base Directory makes this easier). Verify attachments open afterwards.

- **Boox folder naming:** The Boox folder name can be anything you want (doesn't have to be "readings"). Use something descriptive for your device like "E-reader" or "Boox".
189 CI_HOOKS_SUMMARY.md Normal file
@@ -0,0 +1,189 @@
# Declarative CI and Git Hooks - Summary

## What's New

### 1. GitHub Actions CI ✅
`.github/workflows/flake-check.yaml`
- Runs `nixfmt --check` on every push/PR
- Runs `nix flake check`
- Blocks merging if checks fail

### 2. Nix-Native Git Hooks ✅
`modules/git-hooks/default.nix`
- Hooks defined in `flake.nix` (pure Nix)
- Install automatically on `nixos-rebuild switch`
- Run on every git commit
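
The module body isn't shown in this summary; a minimal sketch of what such an activation-script module might look like (the option names match the usage below, everything else is an assumption):

```nix
# Hypothetical sketch of modules/git-hooks/default.nix; the real
# implementation may differ.
{ config, lib, ... }:
let
  cfg = config.services.git-hooks;
in
{
  options.services.git-hooks = {
    enable = lib.mkEnableOption "declarative git hooks";
    flake-path = lib.mkOption {
      type = lib.types.path;
      default = /home/h/nix;
      description = "Repository whose .git/hooks should be managed";
    };
  };

  config = lib.mkIf cfg.enable {
    # Runs on every `nixos-rebuild switch`
    system.activationScripts.gitHooks.text = ''
      echo "🪝 Installing git hooks..."
      # link the flake's generated pre-commit hook into .git/hooks here
      echo "✅ Done"
    '';
  };
}
```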

## Usage

### Install Hooks (One-time per host)

```nix
# Add to hosts/<hostname>/default.nix
{
  imports = [
    # ... other modules
    ../../modules/git-hooks
  ];

  services.git-hooks = {
    enable = true;
    # flake-path = /home/h/nix; # Optional, default
  };
}
```

### Rebuild

```bash
sudo nixos-rebuild switch --flake .#andromache

# Output:
# 🪝 Installing git hooks...
# ✅ Done
```

### Now Hooks Work Automatically

```bash
git add .
git commit -m "changes" # Hooks run automatically
```

## Files

| File | Purpose |
|------|---------|
| `.github/workflows/flake-check.yaml` | CI pipeline |
| `modules/git-hooks/default.nix` | Auto-install module |
| `flake.nix` | Hook definitions |
| `.editorconfig` | Code style |

## Enable on Other Hosts

```nix
# hosts/<hostname>/default.nix
imports = [
  # ... existing modules
  ../../modules/git-hooks # Add this
];

services.git-hooks.enable = true;
```

## Add More Hooks

Edit `flake.nix`:

```nix
checks.${system}.pre-commit-check.hooks = {
  nixfmt-rfc-style.enable = true; # ✅ Already done
  statix.enable = true; # ✅ Already done
  deadnix.enable = true; # ✅ Already done
};
```

All Phase 1 hooks are now enabled!

## Testing

```bash
# 1. Rebuild to install hooks
sudo nixos-rebuild switch --flake .#andromache

# 2. Test hooks
git commit -m "test"

# 3. Test CI locally
nix run nixpkgs#nixfmt-rfc-style -- --check .
nix flake check
```

## Documentation

- `CI_HOOKS_SUMMARY.md` - This file
- `DRUPOL_INFRA_ANALYSIS.md` - Reference patterns
- `AWESOME_NIX_PLAN.md` - Future improvements
- `OPENCODE.md` - Tracking document

## Currently Enabled

| Host | Status | Config File |
|------|--------|--------------|
| andromache | ✅ Enabled | `hosts/andromache/default.nix` |
| astyanax | ✅ Enabled | `hosts/astyanax/default.nix` |
| hecuba | ✅ Enabled | `hosts/hecuba/default.nix` |
| eetion | ✅ Enabled | `hosts/eetion/default.nix` |
| vm | ✅ Enabled | `hosts/vm/default.nix` |

## Clean Slate Test (Astyanax)

```bash
# 1. Remove existing git hooks
rm -rf /home/h/nix/.git/hooks/*
ls -la /home/h/nix/.git/hooks/

# 2. Rebuild astyanax (installs hooks)
sudo nixos-rebuild switch --flake .#astyanax

# Expected output:
# 🪝 Installing git hooks...
# ✅ Done

# 3. Verify hooks were installed
ls -la /home/h/nix/.git/hooks/

# 4. Test hooks work (invalid Nix syntax, so nixfmt rejects it)
echo "{ broken = }" > /home/h/nix/test.nix
git add test.nix
git commit -m "test" # Should fail with nixfmt error

# 5. Clean up
rm /home/h/nix/test.nix
```

## Future Enhancements

### High Priority
- [x] Add statix hook (lint for antipatterns) ✅ Done
- [x] Add deadnix hook (find dead code) ✅ Done
- [x] Enable git-hooks on all hosts ✅ Done
- [ ] Add CI caching (speed up builds)

### Medium Priority
- [ ] Add automated flake.lock updates
- [ ] Add per-host CI checks
- [ ] Add nixos-rebuild tests in CI

## References

- [git-hooks.nix](https://github.com/cachix/git-hooks.nix)
- [nixfmt-rfc-style](https://github.com/NixOS/nixfmt)
- [drupol/infra analysis](DRUPOL_INFRA_ANALYSIS.md)
- [awesome-nix plan](AWESOME_NIX_PLAN.md)
- [OpenCode documentation](OPENCODE.md)

## Quick Reference

```bash
# Rebuild (installs hooks automatically)
sudo nixos-rebuild switch --flake .#<host>

# Verify hooks
ls -la /home/h/nix/.git/hooks/

# Test formatting
nixfmt .

# Check CI status
# https://github.com/hektor/nix/actions
```

## Key Points

- ✅ **Fully declarative** - Hooks install on every rebuild
- ✅ **No manual setup** - No `nix develop` needed
- ✅ **No devShell** - Pure NixOS activation
- ✅ **Reproducible** - Managed by flake.lock
- ✅ **Host-aware** - Per-host configuration
- ✅ **Idempotent** - Checks before installing
70 CLOUD_BACKUP_PLAN.md Normal file
@@ -0,0 +1,70 @@
# Cloud Host Backup Plan

## Security Architecture

### Current Setup
- **astyanax** (local): `b2:lmd005` - single repo, all hosts mixed
- **andromache** (cloud): manual backup via script to `b2:lmd005:desktop-arch`

### Recommended Setup

#### 1. Repository Isolation
Each host gets its own restic repository in a separate subdirectory:

```
b2:lmd005:astyanax/    # restic repo for astyanax
b2:lmd005:andromache/  # restic repo for andromache
b2:lmd005:<hostname>/  # restic repo for each host
```

**Benefits:**
- Cryptographic isolation (different restic keys per repo)
- Can't accidentally prune/delete other hosts' backups
- Easier to restore/manage individual hosts
- Can use B2 lifecycle rules per subdirectory

#### 2. Credential Isolation
Each host gets its own B2 Application Key restricted to its subdirectory:

```
B2 Key for astyanax:   access to `lmd005:astyanax/*`
B2 Key for andromache: access to `lmd005:andromache/*`
```

**Security benefits:**
- If a host is compromised, the attacker only accesses that host's backups
- Cannot delete/read other hosts' backups
- Principle of least privilege

#### 3. Cloud Host Strategy (No B2 credentials on cloud hosts)
For cloud hosts like andromache:

```
andromache (cloud) --[SFTP]--> astyanax (local) --[B2]--> b2:lmd005:andromache/
```

- **andromache**: SSH access only, no B2 credentials
- **astyanax**: Pulls backups via SFTP from andromache, pushes to B2
- **B2 credentials**: Only stored on trusted local machine (astyanax)
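
On astyanax, the B2 leg of this pipeline could be expressed with the standard NixOS restic module; a minimal sketch, where the staging path, secret locations, and schedule are all assumptions:

```nix
# Hypothetical sketch for hosts/astyanax/default.nix
services.restic.backups.andromache = {
  repository = "b2:lmd005:andromache";
  paths = [ "/var/backup/andromache" ]; # assumed SFTP staging directory
  passwordFile = "/run/secrets/restic-andromache"; # assumed secret path
  environmentFile = "/run/secrets/b2-env"; # B2 key id/secret, assumed
  initialize = true; # create the repo on first run
  timerConfig.OnCalendar = "daily";
};
```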

## Implementation Plan

### ✅ Phase 1: Update astyanax backup
- Change repository from `b2:lmd005` to `b2:lmd005:astyanax/` ✅
- Create new restic repo
- Migrate old snapshots if needed
- Update to use host-specific B2 key (when available)

### ✅ Phase 2: Implement cloud host backups
- Use SFTP-based module to pull from andromache ✅
- Store in `b2:lmd005:andromache/` ✅
- No B2 credentials on andromache ✅
- Daily automated backups ✅

### Phase 3: Cleanup old backups
- Clean up old `desktop-arch` snapshots
- Remove old mixed repo (once migration complete)

## Questions
1. Do you want to migrate existing astyanax snapshots to the new subdirectory, or start fresh?
2. Should astyanax have a master/admin B2 key to manage all backups, or just its own?
217 DOCKER_UPDATE_PLAN.md Normal file
@@ -0,0 +1,217 @@
# Docker Container Update Automation Plan

## Current State
- Hecuba (Hetzner cloud host) runs Docker containers
- WUD (What's Up Docker) is already running as a Docker container
- No declarative Docker configuration in NixOS
- Containers are currently managed manually

## Goals
Automate Docker container updates on hecuba with proper declarative management.

## Evaluation: Update Approaches

### Option 1: WUD (What's Up Docker)
**Pros:**
- Already deployed and working
- Simple, single-purpose tool
- Good monitoring capabilities via web UI
- Can schedule update windows
- Supports multiple strategies (always, weekly, etc.)

**Cons:**
- Not declarative
- Requires manual docker-compose or container management
- No NixOS integration

### Option 2: Watchtower (original)
**Pros:**
- More popular and battle-tested
- Simpler configuration
- Wide community support

**Cons:**
- Same as WUD - not declarative

### Option 3: NixOS `virtualisation.oci-containers`
**Pros:**
- Fully declarative
- Reproducible builds
- Integrated with the NixOS system
- Automatic rollback capability
- Can be managed via colmena

**Cons:**
- More complex setup
- Learning curve for the OCI containers syntax
- Update automation still needs to be handled separately

### Option 4: NixOS + Auto-Update
**Pros:**
- Declarative containers
- Automatic system updates can trigger container updates
- Full NixOS ecosystem integration

**Cons:**
- Most complex approach
- Overkill for a simple use case

## Implementation Plan

### Phase 1: Inventory Current Setup
- [ ] Document all existing docker containers on hecuba
- [ ] Document current WUD configuration
- [ ] Document update schedules and preferences
- [ ] Identify containers that should NOT auto-update
- [ ] Map container dependencies

### Phase 2: Choose Strategy
- [ ] Evaluate trade-offs between WUD vs declarative approach
- [ ] Decision: Hybrid approach (declarative + WUD) OR full NixOS

#### Option A: Hybrid (Recommended Short-term)
- Keep WUD for automation
- Add OCI containers to NixOS for declarative config
- Gradually migrate containers one by one

#### Option B: Full NixOS
- Replace WUD with declarative containers
- Use systemd timers for update schedules
- More complex but fully reproducible

### Phase 3: Implementation (Hybrid Approach)

#### Step 1: Create Docker Module
Create `modules/docker/containers.nix`:
```nix
{ config, lib, ... }:
{
  virtualisation.oci-containers = {
    backend = "docker";
    containers = {
      # Container definitions here
    };
  };
}
```

#### Step 2: Define Containers
- [ ] Add WUD container to declarative config
- [ ] Add other existing containers to declarative config
- [ ] Configure container restart policies
- [ ] Set up container-specific networks if needed

#### Step 3: Persistent Storage
- [ ] Document volumes for each container
- [ ] Add volume management to NixOS config
- [ ] Ensure backup processes cover container data

#### Step 4: WUD Configuration
- [ ] Add WUD config to NixOS module
- [ ] Configure watch intervals
- [ ] Set up notifications
- [ ] Configure containers to exclude from auto-update

#### Step 5: Deployment
- [ ] Test configuration locally first
- [ ] Deploy to hecuba via colmena
- [ ] Monitor container restarts
- [ ] Verify WUD still works

### Phase 4: Maintenance & Monitoring
- [ ] Set up container health checks
- [ ] Configure alerts for failed updates
- [ ] Document rollback procedure
- [ ] Schedule regular container audits

## Container Inventory Template

```
Container Name:
Purpose:
Image:
Exposed Ports:
Volumes:
Network:
Auto-Update: yes/no
Restart Policy:
Notes:
```
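
A filled-in entry might look like this (all values are hypothetical, including the image name, and are for illustration only):

```
Container Name: wud
Purpose: Monitor running containers for available image updates
Image: getwud/wud:latest (image name assumed)
Exposed Ports: 3000 (web UI)
Volumes: /var/run/docker.sock:/var/run/docker.sock
Network: bridge
Auto-Update: no (the updater should not update itself unattended)
Restart Policy: unless-stopped
Notes: Notification credentials kept out of the repo
```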

## Example NixOS OCI Container Definition

```nix
# modules/docker/containers.nix
{ config, lib, pkgs, ... }:
{
  virtualisation.oci-containers = {
    backend = "docker";
    containers = {
      # This example uses Watchtower; a WUD container would use a
      # different image and WUD-specific environment variables instead.
      watchtower = {
        image = "containrrr/watchtower:latest";
        ports = [ "8080:8080" ];
        volumes = [
          "/var/run/docker.sock:/var/run/docker.sock"
        ];
        environment = {
          WATCHTOWER_CLEANUP = "true";
          WATCHTOWER_SCHEDULE = "0 2 * * *";
        };
      };
      # Add other containers here
    };
  };
}
```

## Migration Strategy

1. **Document First**: Before changing anything, document the current state
2. **Test Locally**: Use colmena's local deployment if possible
3. **Migrate One by One**: Move containers individually to minimize risk
4. **Monitor Closely**: Watch logs after each migration
5. **Keep Backups**: Ensure data is backed up before major changes

## WUD vs Watchtower Clarification

There are two different tools:
- **Watchtower**: The original tool, more popular
- **WUD (What's Up Docker)**: A different implementation with a web UI

Since you already have WUD running, we should:
1. Document its current configuration
2. Either keep it and make it declarative, OR
3. Switch to Watchtower if it better fits your needs

## Next Steps

1. **Immediate**: Document all current containers and their configs
2. **Decision**: Choose between the hybrid or full NixOS approach
3. **Implementation**: Create the docker containers module
4. **Testing**: Deploy to hecuba and verify

## Questions to Answer

- Which containers are currently running?
- How critical is uptime for each container?
- Any containers that should NEVER auto-update?
- Preferred update schedule (daily, weekly)?
- How should update failures be handled (retry, notify, manual)?
- Do you have backups of container data currently?

## Risk Considerations

- Auto-updates can break applications
- Updates need testing before production (ideally in a staging environment)
- Some containers have configuration changes between versions
- Data loss risk if volumes are misconfigured
- Network disruption during updates

## Monitoring Setup

Consider adding monitoring for:
- Container health status
- Update success/failure rates
- Disk space usage
- Resource consumption
- Backup verification
226 IMPLEMENTATION_PLAN.md Normal file
@@ -0,0 +1,226 @@
# Implementation Plan - Nix Flake Improvements

## Overview

Consolidated plan from:
- [AWESOME_NIX_PLAN.md](AWESOME_NIX_PLAN.md) - Awesome-nix integration
- [DRUPOL_INFRA_ANALYSIS.md](DRUPOL_INFRA_ANALYSIS.md) - Reference patterns
- [OPENCODE.md](OPENCODE.md) - Tracking document

## ✅ Completed

### Code Quality
- ✅ GitHub Actions CI (`.github/workflows/flake-check.yaml`)
- ✅ Nix-native git hooks (`modules/git-hooks/default.nix`)
- ✅ nixfmt integration (runs on commit and CI)
- ✅ .editorconfig (unified code style)

### Declarative Setup
- ✅ Git hooks auto-install on `nixos-rebuild switch`
- ✅ No devShell (fully NixOS activation-based)
- ✅ Hooks enabled on andromache and astyanax

## 📋 Pending Implementation

### Phase 1: Enhanced Code Quality (Week 1)
**Priority: HIGH** ✅ Complete

| # | Task | Effort | Impact | Details | Status |
|---|------|--------|--------|---------|--------|
| 1.1 | Add statix hook | Low | High | Lint for Nix antipatterns | ✅ Done |
| 1.2 | Add deadnix hook | Low | High | Find dead code in Nix files | ✅ Done |
| 1.3 | Enable git-hooks on all hosts | Very Low | Medium | Add to hecuba, eetion, vm | ✅ Done |
| 1.4 | Fix activation script | Low | High | Use `nix flake check` | ✅ Done |
| 1.5 | Fix module syntax errors | Low | High | Correct brace closing | ✅ Done |
**Implementation:**
```nix
# flake.nix
checks.${system}.pre-commit-check.hooks = {
  nixfmt-rfc-style.enable = true; # ✅ Already done
  statix.enable = true; # Add this
  deadnix.enable = true; # Add this
};
```

### Phase 2: CI/CD Enhancements (Week 2)
**Priority: HIGH**

| # | Task | Effort | Impact | Details |
|---|------|--------|--------|---------|
| 2.1 | Add CI caching | Medium | High | Speed up GitHub Actions builds |
| 2.2 | Add automated flake.lock updates | Medium | Medium | Weekly scheduled updates |
| 2.3 | Add per-host CI checks | Medium | Medium | Test specific NixOS configs in CI |

**2.1 CI Caching:**
```yaml
# .github/workflows/flake-check.yaml
- uses: actions/cache@v4
  with:
    path: /nix/store
    key: ${{ runner.os }}-nix-${{ hashFiles('**') }}
```
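
Caching `/nix/store` directly with `actions/cache` can run into permission and size issues; pushing to a binary cache is a common alternative. A sketch, where the cache name and token are placeholders:

```yaml
# Alternative sketch using a Cachix binary cache (name and token assumed)
- uses: cachix/cachix-action@v14
  with:
    name: my-cache # hypothetical cache name
    authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
```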

**2.2 Automated Updates:**
```yaml
# .github/workflows/update-flake-lock.yaml
name: "Auto update flake lock"
on:
  schedule:
    - cron: "0 12 * * 0" # Weekly
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: cachix/install-nix-action@v31
      - run: nix flake update
      - uses: peter-evans/create-pull-request@v6
```
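
Task 2.3 (per-host CI checks) could build each NixOS configuration without deploying it. A sketch with one hard-coded host; a matrix over hosts would generalize this:

```yaml
# Hypothetical step for flake-check.yaml
- name: Build andromache system closure
  run: nix build .#nixosConfigurations.andromache.config.system.build.toplevel
```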

### Phase 3: Developer Experience (Week 3)
**Priority: MEDIUM**

| # | Task | Effort | Impact | Details |
|---|------|--------|--------|---------|
| 3.1 | Add nil/nixd LSP | Low | Medium | Autocompletion, error highlighting |
| 3.2 | Add nix-index + comma | Low | Medium | Run any binary without `nix run` |
| 3.3 | Add nh | Low | Medium | Better CLI output for nix commands |

**3.1 LSP Setup:**
```nix
# Add to home-manager config; the editor's LSP client is configured separately
home.packages = with pkgs; [ nil ];
```

**3.2 nix-index:**
```bash
# Enter a shell with nix-index and comma, then build the file database once
nix-shell -p nix-index comma
nix-index
# Afterwards, run any program ad hoc via comma:
, cowsay hello
```

### Phase 4: Utility Tools (Week 4)
**Priority: LOW**

| # | Task | Effort | Impact | Details |
|---|------|--------|--------|---------|
| 4.1 | Add nix-tree | Very Low | Low | Browse dependency graph |
| 4.2 | Add nix-du | Very Low | Low | Visualize GC roots |
| 4.3 | Add nix-init | Low | Low | Generate packages from URLs |
| 4.4 | Add nix-update | Low | Low | Update package versions |

### Phase 5: Structural Improvements (Future)
**Priority: LOW-MEDIUM**

| # | Task | Effort | Impact | Details |
|---|------|--------|--------|---------|
| 5.1 | Migrate to flake-parts | Medium-High | High | Automatic module discovery |
| 5.2 | Add treefmt-nix | Medium | Medium | Unified project formatting |
| 5.3 | Add nix-direnv | Low | Medium | Auto-load dev environments |

## 📊 Implementation Status

### Code Quality
| Feature | Status | File |
|---------|--------|------|
| CI (GitHub Actions) | ✅ Done | `.github/workflows/flake-check.yaml` |
| Git hooks (Nix-native) | ✅ Done | `modules/git-hooks/default.nix` |
| nixfmt | ✅ Done | Enabled in hooks |
| statix | ✅ Done | Phase 1.1 complete |
| deadnix | ✅ Done | Phase 1.2 complete |
| All hosts enabled | ✅ Done | Phase 1.3 complete |
| CI caching | ⏳ Pending | Phase 2.1 |
| Auto flake updates | ⏳ Pending | Phase 2.2 |

### Hosts with Git Hooks
| Host | Status | Config |
|------|--------|--------|
| andromache | ✅ Enabled | `hosts/andromache/default.nix` |
| astyanax | ✅ Enabled | `hosts/astyanax/default.nix` |
| hecuba | ✅ Enabled | `hosts/hecuba/default.nix` |
| eetion | ✅ Enabled | `hosts/eetion/default.nix` |
| vm | ✅ Enabled | `hosts/vm/default.nix` |

### Developer Tools
| Tool | Status | Phase |
|------|--------|-------|
| nil/nixd | ⏳ Pending | 3.1 |
| nix-index | ⏳ Pending | 3.2 |
| nh | ⏳ Pending | 3.3 |
| nix-tree | ⏳ Pending | 4.1 |
| nix-du | ⏳ Pending | 4.2 |
| nix-init | ⏳ Pending | 4.3 |
| nix-update | ⏳ Pending | 4.4 |

### Structure
| Feature | Status | Phase |
|---------|--------|-------|
| flake-parts | ⏳ Pending | 5.1 |
| treefmt-nix | ⏳ Pending | 5.2 |
| nix-direnv | ⏳ Pending | 5.3 |
| .editorconfig | ✅ Done | Already added |

## 🎯 Quick Wins (Day 1)

If you want immediate value, start with:

### 1. Enable git-hooks on remaining hosts (5 minutes)
```nix
# Add to hosts/hecuba/default.nix, eetion/default.nix, vm/default.nix
imports = [
  # ... existing modules
  ../../modules/git-hooks
];

services.git-hooks.enable = true;
```

### 2. Add statix hook (10 minutes)
```nix
# Edit flake.nix
checks.${system}.pre-commit-check.hooks = {
  nixfmt-rfc-style.enable = true;
  statix.enable = true; # Add this
};
```

### 3. Add deadnix hook (10 minutes)
```nix
# Edit flake.nix
checks.${system}.pre-commit-check.hooks = {
  nixfmt-rfc-style.enable = true;
  statix.enable = true;
  deadnix.enable = true; # Add this
};
```

## 📚 References

- [CI_HOOKS_SUMMARY.md](CI_HOOKS_SUMMARY.md) - Current CI/hooks setup
- [AWESOME_NIX_PLAN.md](AWESOME_NIX_PLAN.md) - Awesome-nix integration
- [DRUPOL_INFRA_ANALYSIS.md](DRUPOL_INFRA_ANALYSIS.md) - Reference patterns
- [OPENCODE.md](OPENCODE.md) - Original tracking

## 🚀 Implementation Order

**Recommended sequence:**
1. **Phase 1** (Week 1) - Enhanced code quality
2. **Phase 2** (Week 2) - CI/CD improvements
3. **Phase 3** (Week 3) - Developer experience
4. **Phase 4** (Week 4) - Utility tools
5. **Phase 5** (Future) - Structural changes

## 🔄 Updates

As items are completed, update the status in this document and check them off in:
- [AWESOME_NIX_PLAN.md](AWESOME_NIX_PLAN.md)
- [OPENCODE.md](OPENCODE.md)
- [CI_HOOKS_SUMMARY.md](CI_HOOKS_SUMMARY.md)
67 OPENCODE.md Normal file
@@ -0,0 +1,67 @@
# OpenCode: Future Nix Flake Improvements

This document tracks potential improvements to the Nix flake configuration.

## 📋 Status Overview

| Category | Status |
|----------|--------|
| Code Quality | 🟡 In Progress |
| CI/CD | ✅ Baseline Done |
| Developer Experience | ⏸ Not Started |
| Utilities | ⏸ Not Started |
| Structure | ⏸ Not Started |

## ✅ Completed

### CI and Git Hooks
- ✅ **GitHub Actions CI** - `.github/workflows/flake-check.yaml`
- ✅ **Nix-native git hooks** - `modules/git-hooks/default.nix`
- ✅ **Declarative hook installation** - Auto-installs on rebuild
- ✅ **nixfmt integration** - Runs on commit and CI
- ✅ **statix integration** - Lints for Nix antipatterns
- ✅ **deadnix integration** - Finds dead code
- ✅ **.editorconfig** - Unified code style
- ✅ **Git hooks on all hosts** - Enabled on andromache, astyanax, hecuba, eetion, vm

### Deduplication
- ✅ **Created `mkNixOS` helper** - Centralized system configuration
|
||||
|
||||
## 📋 Pending Improvements
|
||||
|
||||
See [IMPLEMENTATION_PLAN.md](IMPLEMENTATION_PLAN.md) for detailed implementation phases.
|
||||
|
||||
### Quick Reference
|
||||
| Priority | Task | Phase |
|
||||
|----------|-------|--------|
|
||||
| HIGH | Add statix hook | 1.1 |
|
||||
| HIGH | Add deadnix hook | 1.2 |
|
||||
| HIGH | Enable git-hooks on all hosts | 1.3 |
|
||||
| HIGH | Add CI caching | 2.1 |
|
||||
| MEDIUM | Add automated flake.lock updates | 2.2 |
|
||||
| MEDIUM | Add nil/nixd LSP | 3.1 |
|
||||
| MEDIUM | Add nix-index + comma | 3.2 |
|
||||
| MEDIUM | Add nh | 3.3 |
|
||||
| LOW | Add utility tools (nix-tree, etc.) | 4.x |
|
||||
| LOW | Migrate to flake-parts | 5.1 |
|
||||
|
||||
## 🎯 Next Steps
|
||||
|
||||
1. Review [IMPLEMENTATION_PLAN.md](IMPLEMENTATION_PLAN.md) for complete roadmap
|
||||
2. Start with Phase 1 (Enhanced Code Quality)
|
||||
3. Update this document as items are completed
|
||||
|
||||
## 📚 Documentation
|
||||
|
||||
| Document | Purpose |
|
||||
|----------|---------|
|
||||
| [IMPLEMENTATION_PLAN.md](IMPLEMENTATION_PLAN.md) | ✅ **Main plan** - Consolidated roadmap |
|
||||
| [CI_HOOKS_SUMMARY.md](CI_HOOKS_SUMMARY.md) | Current CI/hooks setup |
|
||||
| [AWESOME_NIX_PLAN.md](AWESOME_NIX_PLAN.md) | Awesome-nix integration details |
|
||||
| [DRUPOL_INFRA_ANALYSIS.md](DRUPOL_INFRA_ANALYSIS.md) | Reference patterns |
|
||||
|
||||
## 🔗 Links
|
||||
|
||||
- [awesome-nix](https://github.com/nix-community/awesome-nix)
|
||||
- [git-hooks.nix](https://github.com/cachix/git-hooks.nix)
|
||||
- [drupol/infra](https://github.com/drupol/infra)
|
||||
130
SIMPLE_HOOKS.md
Normal file
@@ -0,0 +1,130 @@
# Git Hooks - Simple Declarative Setup

## Concept

Hooks are defined in Nix (`flake.nix`) and installed by running `nix flake check` once.

**No systemd services, no activation scripts, no complexity.**

## How It Works

### 1. Hooks Defined in Nix
`flake.nix`:
```nix
checks.${system}.pre-commit-check = git-hooks.lib.${system}.run {
  src = ./.;
  hooks = {
    nixfmt.enable = true;
    statix.enable = true;
    deadnix.enable = true;
  };
};
```

### 2. Installation
Run once on each host:
```bash
nix flake check
```

This installs the hooks and creates `.git/hooks/pre-commit`.

### 3. Automatic
- ✅ Hooks run on every `git commit`
- ✅ CI runs `nix flake check` automatically
- ✅ Hooks are checked on every push/PR

## Usage

### Install Hooks (One-Time Per Host)

```bash
# From the flake directory
nix flake check

# You should see the hooks being installed
```

### Verify Installation

```bash
ls -la .git/hooks/
```

This should show `pre-commit` (and potentially other hooks).

### Test Hooks

```bash
# Create a file with bad formatting
echo "broken { }" > test.nix

# Try to commit (should fail)
git add test.nix
git commit -m "test"

# Clean up
rm test.nix
```

## What's Declarative

| Aspect | Status |
|--------|--------|
| Hook definitions | ✅ Yes - in `flake.nix` |
| Hook installation | ✅ Yes - via `nix flake check` |
| CI integration | ✅ Yes - via `nix flake check` in workflows |
| Local git hooks | ✅ Yes - run automatically on commit |
| No systemd services | ✅ Removed - too complex |
| No activation scripts | ✅ Removed - unnecessary |
| One-time setup | ✅ Yes - run `nix flake check` once per host |

## Files

| File | Status |
|------|--------|
| `flake.nix` | ✅ Hook definitions |
| `.github/workflows/flake-check.yaml` | ✅ CI uses `nix flake check` |
| `.editorconfig` | ✅ Code style |
| `modules/git-hooks/default.nix` | ❌ **DELETED** - Not needed |
| `hosts/*/default.nix` | ✅ **CLEANED** - Removed git-hooks |

## Next Steps

1. Test locally:
   ```bash
   nix flake check
   ls -la .git/hooks/
   echo "broken { }" > test.nix
   git add test.nix
   git commit -m "test" # Should fail
   rm test.nix
   ```

2. Commit the changes:
   ```bash
   git add .
   git commit -m "Simplify: Git hooks via nix flake check (no systemd, no activation)"
   git push
   ```

3. Run `nix flake check` on each host when you next rebuild

## Why This Is The Right Approach

| Criterion | Overcomplicated Solution | Simple Solution |
|-----------|--------------------------|-----------------|
| Declarative | ❌ Systemd service is separate from Nix | ✅ Hooks in `flake.nix`, installed via `nix flake check` |
| Simple | ❌ Multiple layers (activation, systemd) | ✅ One command: `nix flake check` |
| Idempotent | ❌ Runs on every rebuild | ✅ Run once per host |
| Reproducible | ❌ Depends on systemd state | ✅ Pure Nix |

## Summary

**The simplest declarative approach:**

1. Define hooks in `flake.nix` ✅ Already done
2. Run `nix flake check` once per host ✅ To do
3. That's it! Hooks work automatically ✅ Declarative

No systemd services. No activation scripts. No complexity.
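For contributors who use `nix develop`, git-hooks.nix also exposes a `shellHook` on the check's result, so the hooks can install themselves on shell entry. A minimal sketch — the `devShells` output is an assumption, as this flake does not currently define one:

```nix
# Hypothetical addition to the flake outputs: installs the hooks
# whenever someone enters the dev shell with `nix develop`.
devShells.${system}.default = pkgs.mkShell {
  inherit (self.checks.${system}.pre-commit-check) shellHook;
  buildInputs = self.checks.${system}.pre-commit-check.enabledPackages;
};
```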
84
flake.lock
generated
@@ -53,11 +53,11 @@
},
"locked": {
"dir": "pkgs/firefox-addons",
"lastModified": 1770091431,
"narHash": "sha256-9Sqq/hxq8ZDLRSzu+edn0OfWG+FAPWFpwMKaJobeLec=",
"lastModified": 1770177820,
"narHash": "sha256-0iGDl/ct3rW+h6+sLq4RZaze/U/aQo2L5sLLuyjuVTk=",
"owner": "rycee",
"repo": "nur-expressions",
"rev": "4f827ff035c6ddc58d04c45abe5b777d356b926a",
"rev": "c7794d3f46304de5234008c31b5b28a9d5709184",
"type": "gitlab"
},
"original": {
@@ -83,6 +83,22 @@
"type": "github"
}
},
"flake-compat_2": {
"flake": false,
"locked": {
"lastModified": 1767039857,
"narHash": "sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns=",
"owner": "NixOS",
"repo": "flake-compat",
"rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "flake-compat",
"type": "github"
}
},
"flake-parts": {
"inputs": {
"nixpkgs-lib": [
@@ -138,6 +154,49 @@
"type": "github"
}
},
"git-hooks": {
"inputs": {
"flake-compat": "flake-compat_2",
"gitignore": "gitignore",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1769939035,
"narHash": "sha256-Fok2AmefgVA0+eprw2NDwqKkPGEI5wvR+twiZagBvrg=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "a8ca480175326551d6c4121498316261cbb5b260",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
"git-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"home-manager": {
"inputs": {
"nixpkgs": [
@@ -145,11 +204,11 @@
]
},
"locked": {
"lastModified": 1769978395,
"narHash": "sha256-gj1yP3spUb1vGtaF5qPhshd2j0cg4xf51pklDsIm19Q=",
"lastModified": 1770263241,
"narHash": "sha256-R1WFtIvp38hS9x63dnijdJw1KyIiy30KGea6e6N7LHs=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "984708c34d3495a518e6ab6b8633469bbca2f77a",
"rev": "04e5203db66417d548ae1ff188a9f591836dfaa7",
"type": "github"
},
"original": {
@@ -321,11 +380,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1770019141,
"narHash": "sha256-VKS4ZLNx4PNrABoB0L8KUpc1fE7CLpQXQs985tGfaCU=",
"lastModified": 1770197578,
"narHash": "sha256-AYqlWrX09+HvGs8zM6ebZ1pwUqjkfpnv8mewYwAo+iM=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "cb369ef2efd432b3cdf8622b0ffc0a97a02f3137",
"rev": "00c21e4c93d963c50d4c0c89bfa84ed6e0694df2",
"type": "github"
},
"original": {
@@ -579,6 +638,7 @@
"colmena": "colmena",
"disko": "disko",
"firefox-addons": "firefox-addons",
"git-hooks": "git-hooks",
"home-manager": "home-manager",
"nix-on-droid": "nix-on-droid",
"nix-secrets": "nix-secrets",
@@ -612,11 +672,11 @@
]
},
"locked": {
"lastModified": 1770110318,
"narHash": "sha256-NUVGVtYBTC96WhPh4Y3SVM7vf0o1z5W4uqRBn9v1pfo=",
"lastModified": 1770145881,
"narHash": "sha256-ktjWTq+D5MTXQcL9N6cDZXUf9kX8JBLLBLT0ZyOTSYY=",
"owner": "Mic92",
"repo": "sops-nix",
"rev": "f990b0a334e96d3ef9ca09d4bd92778b42fd84f9",
"rev": "17eea6f3816ba6568b8c81db8a4e6ca438b30b7c",
"type": "github"
},
"original": {
28
flake.nix
@@ -43,6 +43,10 @@
url = "github:zhaofengli/colmena";
inputs.nixpkgs.follows = "nixpkgs";
};
git-hooks = {
url = "github:cachix/git-hooks.nix";
inputs.nixpkgs.follows = "nixpkgs";
};
};

outputs =
@@ -52,6 +56,7 @@
home-manager,
nix-on-droid,
nixgl,
git-hooks,
...
}@inputs:
let
@@ -61,11 +66,13 @@
hostDirNames = utils.dirNames ./hosts;
system = "x86_64-linux";
dotsPath = ./dots;
pkgs = import nixpkgs { inherit system; };
in
{
nix.nixPath = [
"nixpkgs=${inputs.nixpkgs}"
]; # <https://github.com/nix-community/nixd/blob/main/nixd/docs/configuration.md>
];

nixosConfigurations =
(lib.genAttrs hostDirNames (
host:
@@ -73,7 +80,12 @@
system = import ./hosts/${host}/system.nix;
modules = [ ./hosts/${host} ];
specialArgs = {
inherit inputs outputs dotsPath;
inherit
inputs
outputs
dotsPath
self
;
};
}
))
@@ -94,6 +106,7 @@
};
};
};

homeConfigurations = {
work = home-manager.lib.homeManagerConfiguration {
pkgs = import nixpkgs {
@@ -106,7 +119,7 @@
};
};
};
# https://github.com/nix-community/nix-on-droid/blob/master/templates/advanced/flake.nix

nixOnDroidConfigurations = {
pixel = nix-on-droid.lib.nixOnDroidConfiguration {
modules = [ ./phone ];
@@ -128,6 +141,15 @@
;
};

checks.${system}.pre-commit-check = git-hooks.lib.${system}.run {
src = ./.;
hooks = {
nixfmt.enable = true;
statix.enable = true;
deadnix.enable = true;
};
};

images.sd-image-aarch64 = self.nixosConfigurations.sd-image-aarch64.config.system.build.sdImage;
};
}
@@ -17,6 +17,7 @@ in
../../modules/git
../../modules/k8s/k9s.nix
../../modules/kitty.nix
../../modules/nfc
../../modules/nvim.nix
../../modules/ssh.nix
../../modules/taskwarrior.nix
@@ -38,6 +39,7 @@ in
cloud.hetzner.enable = true;
comms.signal.enable = true;
github.enable = true;
nfc.proxmark3.enable = true;

shell.bash = {
enable = true;
5
home/modules/nfc/default.nix
Normal file
@@ -0,0 +1,5 @@
{
  imports = [
    ./proxmark3.nix
  ];
}
21
home/modules/nfc/proxmark3.nix
Normal file
@@ -0,0 +1,21 @@
{
  config,
  lib,
  pkgs,
  ...
}:

let
  cfg = config.nfc.proxmark3;
in
{
  options.nfc.proxmark3 = {
    enable = lib.mkEnableOption "proxmark3 (iceman fork)";
  };

  config = lib.mkIf cfg.enable {
    home.packages = [
      pkgs.proxmark3
    ];
  };
}
@@ -2,6 +2,7 @@
lib,
inputs,
outputs,
self,
config,
pkgs,
...
@@ -28,7 +29,7 @@ in
../../modules/desktops/niri
../../modules/backups
../../modules/bluetooth
../../modules/keyboard
../../modules/## modules/keyboard
(import ../../modules/networking { inherit hostName; })
../../modules/users
../../modules/audio
@@ -107,33 +108,103 @@ in
enable = true;
harden = true;
};

locate = {
enable = true;
package = pkgs.plocate;
};
};

my.syncthing = {
networking.hostName = hostName;

ssh.username = username;
ssh.authorizedHosts = [ "astyanax" ];

secrets.username = username;
docker.user = username;

nix.settings.secret-key-files = [ config.sops.secrets.nix_signing_key_andromache.path ];

disko.devices = {
disk.data = {
type = "disk";
device = "/dev/nvme0n1";
content = {
type = "gpt";
partitions = {
data = {
size = "100%";
content = {
type = "filesystem";
format = "ext4";
mountpoint = "/data";
};
};
};
};
};
};

hardware = {
cpu.intel.updateMicrocode = true;
graphics.enable = true;
nvidia = {
modesetting.enable = true;
powerManagement.enable = true;
powerManagement.finegrained = false;
open = true;
nvidiaSettings = true;
package = config.boot.kernelPackages.nvidiaPackages.stable;
};
};

boot.binfmt.emulatedSystems = [ "aarch64-linux" ];

environment.systemPackages = [
inputs.colmena.packages.${pkgs.system}.colmena
];

services = {
git-hooks = {
enable = true;
deviceNames = [
"boox"
"astyanax"
];
folders = {
readings = {
path = "/home/h/doc/readings";
id = "readings";
devices = [
{
device = "boox";
type = "receiveonly";
}
"astyanax"
];
};
};

xserver = {
videoDrivers = [ "nvidia" ];
};

openssh = {
enable = true;
harden = true;
};

locate = {
enable = true;
package = pkgs.plocate;
};
};

# my.syncthing = {
#   enable = true;
#   deviceNames = [
#     "boox"
#     "astyanax"
#   ];
#   folders = {
#     readings = {
#       path = "/home/h/doc/readings";
#       id = "readings";
#       devices = [
#         {
#           device = "boox";
#           type = "receiveonly";
#         }
#         "astyanax"
#       ];
#     };
#   };
# };

networking = {
# TODO: generate unique hostId on actual host with: head -c 8 /etc/machine-id
hostId = "80eef97e";
@@ -2,6 +2,7 @@
lib,
inputs,
outputs,
self,
config,
pkgs,
...
@@ -97,35 +98,14 @@ in
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];

services = {
git-hooks = {
enable = true;
};
fwupd.enable = true;
openssh = {
enable = true;
harden = true;
};
};

my.syncthing = {
enable = true;
deviceNames = [
"boox"
"andromache"
];
folders = {
readings = {
path = "/home/h/doc/readings";
id = "readings";
devices = [
{
device = "boox";
type = "receiveonly";
}
"andromache"
];
};
};
};

services = {
locate = {
enable = true;
package = pkgs.plocate;
@@ -20,6 +20,13 @@ in
"astyanax"
];

ssh.username = username;
ssh.publicHostname = "eetion";
ssh.authorizedHosts = [
"andromache"
"astyanax"
];

boot.loader = {
grub.enable = false;
generic-extlinux-compatible.enable = true;
71
hosts/hecuba/UPTIME_PLAN.md
Normal file
@@ -0,0 +1,71 @@
# Hecuba uptime server plan

## Current State

- Hecuba is a Hetzner cloud host running NixOS
- Docker is enabled for user `username`
- Firewall allows ports 80 and 443
- No existing uptime monitoring

## Goals

Monitor Docker containers on hecuba with a self-hosted uptime dashboard.

## Uptime Monitoring Options

### Option 1: Uptime Kuma (Recommended)

- Easy-to-use web dashboard
- Docker-based (fits the existing setup)
- HTTP/TCP/ping monitoring
- Status pages
- Notifications (email, Telegram, etc.)

## Implementation Plan

### Phase 1: Evaluate & Choose
- [ ] Research uptime monitoring solutions $id{11c06cf8-bea2-4858-9c7f-a293c3e8fba5}
- [ ] Decide on a solution (Uptime Kuma likely best fit) $id{f87debaa-312e-424e-80e0-b624f0768774}

### Phase 2: Docker Setup
- [ ] Add uptime monitoring container to hecuba $id{7d8c5bf4-3d49-4f4c-87f1-1f34c5a4dbec}
- [ ] Configure persistent storage $id{9568b276-2885-4ae7-b5ca-5a9d7efb6a69}
- [ ] Set up reverse proxy (ports 80/443 already open) $id{c2f6ea85-f5e3-465d-95ba-62738a97da80}
- [ ] Configure SSL certificate $id{95c257e2-931b-44da-b0b1-a3e088956800}

### Phase 3: Configuration
- [ ] Add Docker containers to monitor $id{4670deda-70d2-4c37-8121-2035aa7d57fb}
- [ ] Set up alert thresholds $id{da6acf90-0b62-4451-bb11-4f74c5c5dd27}
- [ ] Configure notifications (email/Telegram) $id{0b188adf-9a27-4499-9a19-b1ebd081bd21}
- [ ] Test monitoring $id{dd0df63f-5da2-4ba0-a386-45162a2bb642}

### Phase 4: Maintenance
- [ ] Add to backup routine $id{33a2c381-94cb-460e-b600-67cb503826d7}
- [ ] Document monitoring setup $id{f3bf7b85-737f-4511-8d3e-a270044abea3}
- [ ] Review and adjust alerts $id{32e46c53-dd9d-48a8-aef2-985ebaadd8da}

## Technical Details

### Storage Location
`/var/lib/uptime-kuma` or a similar persistent volume

### Docker Compose Structure
```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - /var/lib/uptime-kuma:/app/data
    ports:
      - 3001:3001
    restart: always
```

### NixOS Integration
- Consider using `virtualisation.oci-containers` for a declarative setup
- Or keep a docker-compose file (more flexible for updates)

## Next Steps
1. Pick an uptime monitoring solution
2. Decide on the deployment method (NixOS declarative vs docker-compose)
3. Implement
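For comparison, the `virtualisation.oci-containers` route considered in the plan could look roughly like this (a sketch only — the container name, port binding, and data path mirror the compose example above rather than a finished module):

```nix
# Sketch: declarative equivalent of the docker-compose example
virtualisation.oci-containers = {
  backend = "docker";
  containers.uptime-kuma = {
    image = "louislam/uptime-kuma:1";
    ports = [ "127.0.0.1:3001:3001" ];
    volumes = [ "/var/lib/uptime-kuma:/app/data" ];
  };
};
```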
105
modules/backups/cloud-hosts.nix
Normal file
@@ -0,0 +1,105 @@
{
  lib,
  config,
  ...
}:

let
  cfg = config.cloud-host-backup;
in
{
  options = {
    cloud-host-backup = {
      enable = lib.mkEnableOption "pull backups from cloud hosts via SFTP";

      hosts = lib.mkOption {
        type = lib.types.attrsOf (
          lib.types.submodule {
            options = {
              hostname = lib.mkOption {
                type = lib.types.str;
                description = "SSH hostname of the cloud host";
              };
              username = lib.mkOption {
                type = lib.types.str;
                default = config.secrets.username;
                description = "SSH username for the cloud host";
              };
              remotePath = lib.mkOption {
                type = lib.types.str;
                default = "/home";
                description = "Remote path to back up";
              };
              excludePatterns = lib.mkOption {
                type = lib.types.listOf lib.types.str;
                description = "Exclude patterns for restic";
                default = [ ];
              };
            };
          }
        );
        default = { };
        example = {
          andromache = {
            hostname = "andromache.local";
          };
        };
      };

      b2Bucket = lib.mkOption {
        type = lib.types.str;
        description = "B2 bucket name";
      };

      passwordFile = lib.mkOption {
        type = lib.types.str;
        default = config.sops.secrets."restic_password".path;
      };

      sshKeyFile = lib.mkOption {
        type = lib.types.str;
        default = "/home/${config.secrets.username}/.ssh/id_ed25519";
        description = "SSH private key file for authentication";
      };
    };
  };

  config = lib.mkIf cfg.enable {
    sops.templates = lib.mapAttrs' (
      hostName: hostCfg:
      lib.nameValuePair "restic/repo-cloud-${hostName}" {
        content = "b2:${config.sops.placeholder."b2_bucket_name"}:${hostName}/";
      }
    ) cfg.hosts;

    services.restic.backups = lib.mapAttrs' (
      hostName: hostCfg:
      lib.nameValuePair "cloud-${hostName}" {
        repositoryFile = config.sops.templates."restic/repo-cloud-${hostName}".path;
        passwordFile = cfg.passwordFile;
        paths = [ "sftp:${hostCfg.username}@${hostCfg.hostname}:${hostCfg.remotePath}" ];
        timerConfig = {
          OnCalendar = "daily";
          Persistent = true;
        };
        initialize = true;
        extraBackupArgs = [
          "--one-file-system"
        ]
        ++ lib.optional (hostCfg.excludePatterns != [ ]) (
          builtins.concatStringsSep " " (map (p: "--exclude ${p}") hostCfg.excludePatterns)
        );
        pruneOpts = [
          "--keep-daily 7"
          "--keep-weekly 4"
          "--keep-monthly 6"
          "--keep-yearly 1"
        ];
        environmentFile = config.sops.templates."restic/b2-env".path;
        extraOptions = [
          "sftp.command=ssh -i ${cfg.sshKeyFile} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
        ];
      }
    ) cfg.hosts;
  };
}
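As a usage sketch, a host importing `modules/backups/cloud-hosts.nix` might enable it like so (the hostname, bucket name, and exclude pattern are illustrative, not taken from this repo):

```nix
# Hypothetical host configuration — values are placeholders
cloud-host-backup = {
  enable = true;
  b2Bucket = "my-restic-bucket";
  hosts.hecuba = {
    hostname = "hecuba.example.com";
    excludePatterns = [ "*/.cache" ];
  };
};
```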
67
modules/cloudflare-dns/README.md
Normal file
@@ -0,0 +1,67 @@
# Cloudflare DNS Module

Declarative DNS management for Cloudflare using `flarectl`.

## Usage

Add to your host configuration:
```nix
{
  imports = [
    ../../modules/cloudflare-dns
  ];

  cloudflare-dns = {
    enable = true;
    apiToken = "YOUR_CLOUDFLARE_API_TOKEN";
    zoneId = "YOUR_ZONE_ID";

    records = [
      {
        name = "uptime";
        type = "A";
        content = "YOUR_SERVER_IP";
        proxied = true;
      }
      {
        name = "monitoring";
        type = "CNAME";
        content = "uptime.example.com";
        proxied = true;
      }
    ];
  };
}
```

## Getting Your API Token

1. Go to https://dash.cloudflare.com/profile/api-tokens
2. Click "Create Token"
3. Use the "Edit zone DNS" template
4. Select your zone (domain)
5. Copy the token

## Getting Your Zone ID

1. Go to https://dash.cloudflare.com
2. Click on your domain
3. Look for "Zone ID" in the right sidebar
4. Copy the ID

## Options

- `apiToken` - Cloudflare API token (required)
- `zoneId` - Cloudflare zone ID (required)
- `records` - List of DNS records to manage
  - `name` - Record name (e.g., "uptime" for uptime.example.com)
  - `type` - Record type (A, AAAA, CNAME, etc.; default: A)
  - `content` - Record content (IP address, hostname, etc.)
  - `proxied` - Use Cloudflare proxy (default: true)
  - `ttl` - TTL value (1 = auto; default: 1)

## Usage Notes

- Records are updated on system activation
- Use `sudo systemctl start cloudflare-dns-update` to manually update
- The API token should be stored securely (consider using sops-nix)
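The sops-nix suggestion could be wired in roughly like this. Assumptions: a sops secret rendered as a `CF_API_TOKEN=...` environment file already exists, and the override of the module's service unit is hypothetical (the module as written passes the token via `Environment` instead):

```nix
# Sketch: keep the token out of the Nix store via an EnvironmentFile.
# The secret name "cloudflare_env" is illustrative.
sops.secrets."cloudflare_env" = { };
systemd.services.cloudflare-dns-update.serviceConfig.EnvironmentFile =
  config.sops.secrets."cloudflare_env".path;
```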
92
modules/cloudflare-dns/default.nix
Normal file
@@ -0,0 +1,92 @@
{
  config,
  lib,
  pkgs,
  ...
}:

let
  cfg = config.cloudflare-dns;
in
{
  options.cloudflare-dns = {
    enable = lib.mkEnableOption "Cloudflare DNS management via flarectl";

    apiToken = lib.mkOption {
      type = lib.types.str;
      description = "Cloudflare API token";
    };

    zoneId = lib.mkOption {
      type = lib.types.str;
      description = "Cloudflare zone ID (from your domain's Cloudflare page)";
    };

    records = lib.mkOption {
      type = lib.types.listOf (
        lib.types.submodule {
          options = {
            name = lib.mkOption {
              type = lib.types.str;
              description = "DNS record name (e.g., 'uptime' for uptime.example.com)";
            };
            type = lib.mkOption {
              type = lib.types.str;
              default = "A";
              description = "DNS record type (A, AAAA, CNAME, etc.)";
            };
            content = lib.mkOption {
              type = lib.types.str;
              description = "DNS record content (IP address, hostname, etc.)";
            };
            proxied = lib.mkOption {
              type = lib.types.bool;
              default = true;
              description = "Use Cloudflare proxy (orange cloud)";
            };
            ttl = lib.mkOption {
              type = lib.types.int;
              default = 1;
              description = "TTL (1 = auto)";
            };
          };
        }
      );
      default = [ ];
      description = "List of DNS records to manage";
    };
  };

  config = lib.mkIf cfg.enable {
    environment.systemPackages = [ pkgs.flarectl ];

    systemd.services.cloudflare-dns-update = {
      description = "Update Cloudflare DNS records";
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        Type = "oneshot";
        Environment = [ "CF_API_TOKEN=${cfg.apiToken}" ];
      };
      script = ''
        ${lib.concatMapStringsSep "\n" (record: ''
          echo "Updating DNS record: ${record.name} (${record.type}) -> ${record.content}"
          # create-or-update creates the record if missing, otherwise updates it in place
          ${pkgs.flarectl}/bin/flarectl dns create-or-update \
            --zone ${cfg.zoneId} \
            --name ${record.name} \
            --type ${record.type} \
            --content ${record.content} \
            --ttl ${toString record.ttl} ${lib.optionalString record.proxied "--proxy"}
        '') cfg.records}
      '';
    };
  };
}
35
modules/git-hooks/default.nix
Normal file
@@ -0,0 +1,35 @@
{
  config,
  lib,
  pkgs,
  ...
}:

{
  options.services.git-hooks = {
    enable = lib.mkEnableOption "Install git hooks for Nix flake";
    install = lib.mkOption {
      type = lib.types.nullOr lib.types.path;
      default = null;
      description = "Install git hooks once (run `nix flake check`)";
    };
  };

  config = lib.mkIf config.services.git-hooks.enable {
    system.activationScripts.install-git-hooks = lib.stringAfter [ "users" ] ''
      ${lib.getExe pkgs.nix} build /home/h/nix/.#pre-commit-check 2>&1 || true
      echo "✅ Git hooks installed"
    '';

    environment.systemPackages = lib.singleton (
      pkgs.writeShellApplication {
        name = "install-git-hooks";
        runtimeInputs = [ pkgs.git ];
        text = ''
          ${lib.getExe pkgs.nix} build /home/h/nix/.#pre-commit-check || echo "⚠️ Hook installation had issues"
          echo "✅ Done"
        '';
      }
    );
  };
}
39
modules/uptime-kuma/default.nix
Normal file
@@ -0,0 +1,39 @@
{
  config,
  lib,
  pkgs,
  ...
}:

let
  cfg = config.my.uptime-kuma;
in
{
  options.my.uptime-kuma.enable = lib.mkEnableOption "Uptime Kuma monitoring service (Docker container)";

  config = lib.mkIf cfg.enable {
    virtualisation.oci-containers = {
      backend = "docker";
      containers.uptime-kuma = {
        image = "louislam/uptime-kuma:latest";
        ports = [ "127.0.0.1:3001:3001" ];
        volumes = [ "/var/lib/uptime-kuma:/app/data" ];
        environment = {
          TZ = "UTC";
          UMASK = "0022";
        };
        extraOptions = [
          "--network=proxiable"
        ];
      };
    };

    systemd.tmpfiles.settings."uptime-kuma" = {
      "/var/lib/uptime-kuma".d = {
        mode = "0755";
      };
    };

    environment.systemPackages = with pkgs; [ docker-compose ];
  };
}
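Since the container binds only to `127.0.0.1:3001` and joins a `proxiable` network, a TLS-terminating reverse proxy is expected in front of it. Paired with the NixOS Caddy module, that could look roughly like this (the domain is an assumption; ports 80/443 are already open on hecuba):

```nix
# Sketch: reverse proxy in front of the Uptime Kuma container.
# "uptime.example.com" is a placeholder domain.
services.caddy = {
  enable = true;
  virtualHosts."uptime.example.com".extraConfig = ''
    reverse_proxy 127.0.0.1:3001
  '';
};
```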