# Getting Started with ANSAI
From zero to AI-powered automation in 5 minutes.
## What You'll Build

By the end of this guide, you'll have:
- ✅ ANSAI installed and configured
- ✅ AI backend running (LiteLLM or Fabric)
- ✅ Your first AI-powered automation deployed
- ✅ Self-healing services with intelligent diagnostics

Time required: 5-10 minutes
## Prerequisites
Before you start, you'll need:

- Linux or macOS (Windows via WSL)
- Python 3.9+ (`python3 --version`)
- Git (`git --version`)
- SSH access to a server (for deployment)

Optional but recommended:

- Ansible (`pip3 install ansible`)
- At least one API key: OpenAI, Anthropic, Groq, or local Ollama
## Step 1: Install ANSAI (1 minute)

### Quick Install (One-Liner)

`curl -sSL https://raw.githubusercontent.com/thebyrdman-git/ansai/main/install.sh | bash`

Or manual install:
```bash
# Clone repository
git clone https://github.com/thebyrdman-git/ansai.git ~/.ansai

# Add to PATH
echo 'export PATH="$HOME/.ansai/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

# Verify installation
ansai-progress-tracker --help
```
What this does:
- Clones ANSAI to ~/.ansai
- Adds ANSAI to your PATH
- Creates config directories
- Checks prerequisites
✅ You should see: a welcome message and next steps
## Step 2: Set Up AI Backend (2 minutes)

ANSAI needs an AI backend to be useful. Choose one:
### Option A: LiteLLM (Multi-Model Routing) - Recommended

Best for: cost optimization, multi-provider setup
```bash
# Install LiteLLM
pip3 install 'litellm[proxy]'

# Create config
mkdir -p ~/.config/ansai
cat > ~/.config/ansai/litellm_config.yaml << 'EOF'
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
      max_tokens: 4096
EOF

# Set API key
export OPENAI_API_KEY="your-api-key-here"

# Start LiteLLM proxy
litellm --config ~/.config/ansai/litellm_config.yaml --port 4000 &
```
Verify it's working, for example by listing the models the proxy serves: `curl http://localhost:4000/v1/models` (the proxy exposes OpenAI-compatible endpoints).
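To exercise the proxy from code, you can POST an OpenAI-style chat-completions request to it. A minimal sketch using only the standard library, assuming the proxy is listening on port 4000 and the model name matches your config (`gpt-4o` here):

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(base_url: str, model: str, prompt: str, timeout: int = 30) -> str:
    """POST the payload to the proxy and return the first choice's text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (with the proxy running):
#   ask("http://localhost:4000", "gpt-4o", "Say hello")
```

Because the proxy is OpenAI-compatible, any OpenAI client library pointed at `http://localhost:4000` should work the same way.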
### Option B: Fabric (Text Processing Patterns)

Best for: log analysis, text transformation
```bash
# Install Fabric
pipx install fabric-ai

# Set up patterns
fabric --setup

# Test it
echo "This is a test log entry" | fabric -p summarize
```
### Option C: Local Ollama (No API Key Required)

Best for: privacy, no costs, offline use
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3

# Configure LiteLLM to use Ollama
cat > ~/.config/ansai/litellm_config.yaml << 'EOF'
model_list:
  - model_name: local-llama3
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
      api_key: "sk-ollama"  # Dummy key
EOF

# Start LiteLLM
litellm --config ~/.config/ansai/litellm_config.yaml --port 4000 &
```
✅ You should see: the AI backend running at http://localhost:4000
## Step 3: Configure Environment (1 minute)
```bash
# Set your email for alerts
export ANSAI_ADMIN_EMAIL="[email protected]"

# Configure SMTP (Gmail example)
export ANSAI_SMTP_SERVER="smtp.gmail.com"
export ANSAI_SMTP_PORT="587"
export ANSAI_SMTP_USER="[email protected]"
export ANSAI_SMTP_PASSWORD="your-app-password"

# Save to your shell profile
cat >> ~/.bashrc << 'EOF'
# ANSAI Configuration
export ANSAI_ADMIN_EMAIL="[email protected]"
export ANSAI_SMTP_SERVER="smtp.gmail.com"
export ANSAI_SMTP_PORT="587"
export ANSAI_SMTP_USER="[email protected]"
export ANSAI_SMTP_PASSWORD="your-app-password"
EOF
```
**Gmail users:** Use an App Password, not your regular password. Enable 2FA first, then generate the App Password.
✅ You should see: no errors when exporting the variables
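If you want to sanity-check the configuration from a script, here is a small sketch. The variable names come from this guide; treating all five as required is an assumption:

```python
import os

# ANSAI environment variables used in this guide (required set is an assumption)
REQUIRED = [
    "ANSAI_ADMIN_EMAIL",
    "ANSAI_SMTP_SERVER",
    "ANSAI_SMTP_PORT",
    "ANSAI_SMTP_USER",
    "ANSAI_SMTP_PASSWORD",
]

def missing_vars(env: dict) -> list:
    """Return the required ANSAI variables that are absent or empty in `env`."""
    return [name for name in REQUIRED if not env.get(name)]

# Report anything still unset in the current shell environment
for name in missing_vars(dict(os.environ)):
    print(f"missing: {name}")
```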
## Step 4: Configure Your Target Server (1 minute)
```bash
# Create Ansible inventory
cat > ~/.ansai/orchestrators/ansible/inventory/hosts.yml << 'EOF'
all:
  children:
    servers:
      hosts:
        my-server:
          ansible_host: 192.168.1.100   # Your server IP
          ansible_user: your-username   # Your SSH user
          ansible_become: true
          ansible_python_interpreter: /usr/bin/python3
EOF
```
Test SSH connectivity with Ansible's ping module: `ansible all -i ~/.ansai/orchestrators/ansible/inventory/hosts.yml -m ping`

✅ You should see: a "ping": "pong" response
## Step 5: Deploy AI-Powered Monitoring (1 minute)
```bash
cd ~/.ansai/orchestrators/ansible

# Deploy to your server
ansible-playbook playbooks/deploy-ai-powered-monitoring.yml -i inventory/hosts.yml

# Watch the magic happen...
```

What this deploys:

- ✅ Universal self-healing for systemd services
- ✅ AI-powered root cause analysis
- ✅ Email alerts with diagnostics
- ✅ Automatic remediation for common failures
✅ You should see:

```
┌────────────────────────────────────────────────┐
│  AI-Powered Monitoring Deployed Successfully!  │
└────────────────────────────────────────────────┘
```
## Step 6: Test It! (1 minute)

### Test 1: Trigger Self-Healing
```bash
# SSH to your server
ssh [email protected]

# Stop a monitored service
sudo systemctl stop sshd

# Watch self-healing in action
sudo tail -f /var/log/self-heal/sshd.log
```

What happens:

1. Service stops
2. ANSAI detects the failure
3. AI analyzes logs for the root cause
4. Service auto-restarts
5. You receive an email with a diagnostic report
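The detect-analyze-restart flow can be sketched in a few lines. This is an illustration of the loop, not ANSAI's actual implementation; the probe, restart, analysis, and notification callables are injected so the logic stays self-contained:

```python
from typing import Callable

def heal_service(
    name: str,
    is_active: Callable[[str], bool],    # e.g. wraps `systemctl is-active`
    restart: Callable[[str], bool],      # e.g. wraps `systemctl restart`
    analyze_logs: Callable[[str], str],  # e.g. sends recent journal lines to the LLM
    notify: Callable[[str], None],       # e.g. sends the diagnostic email
) -> bool:
    """If `name` is down: analyze, restart, and notify. Returns final health."""
    if is_active(name):
        return True  # healthy, nothing to do
    cause = analyze_logs(name)
    recovered = restart(name)
    status = "healed" if recovered else "restart FAILED"
    notify(f"[ANSAI] {name} {status}. Root cause analysis: {cause}")
    return recovered

# Example with stubbed probes:
sent = []
ok = heal_service(
    "sshd",
    is_active=lambda n: False,
    restart=lambda n: True,
    analyze_logs=lambda n: "service stopped manually via systemctl",
    notify=sent.append,
)
```

In a real deployment the stubs would shell out to `systemctl` and `journalctl`; the loop itself is the same.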
### Test 2: Check Status Dashboard

Run the status script on the server: `ssh your-server "sudo /usr/local/bin/miraclemax-status.sh"`

You'll see:

```
┌────────────────────────────────────────────────┐
│  System Health Status - miraclemax             │
└────────────────────────────────────────────────┘
🟢 sshd   Active since 2025-11-18 (self-healed)
🟢 cron   Active since 2025-11-15

System Overview:
  CPU Load: 0.25
  Memory:   45%
  Disk:     60%
```
### Test 3: Review Email Alert

Check your email (ANSAI_ADMIN_EMAIL). You should receive:

```
Subject: [ANSAI] Service Healed: sshd on my-server

AI Root Cause Analysis:
  Primary: Service stopped manually via systemctl
  Contributing: None detected
  Correlation: No related service failures

✅ Remediation Taken:
  1. Restarted service (successful)
  2. Verified port 22 accessibility
  3. Validated configuration files

Current Status: HEALTHY
Next healing window: Immediate (on failure)

Recommendations:
  - Service healthy, no action needed
  - Monitor for repeated manual stops
```
✅ Success! You now have AI-powered self-healing infrastructure.
## What You Just Built
Congratulations! You now have:
### AI-Powered Automation
- Not just "restart on failure"
- Root cause analysis using LLMs
- Intelligent event correlation
- Predictive failure detection
### Self-Healing Infrastructure
- Automatic service recovery
- Port conflict resolution
- Configuration validation
- Database connectivity checks
### Intelligent Alerts
- Detailed diagnostic emails
- AI-analyzed root causes
- Remediation explanations
- Prevention recommendations
### Visibility & Control
- Real-time status dashboard
- Comprehensive logging
- Healing history tracking
- Manual override capability
## Next Steps

### 1. Customize Your Deployment

Add more services to monitor by editing ~/.ansai/orchestrators/ansible/playbooks/deploy-ai-powered-monitoring.yml:
```yaml
monitored_services:
  - name: "nginx"
    description: "Web server"
    critical: true
  - name: "postgresql"
    description: "Database"
    critical: true
  - name: "redis"
    description: "Cache"
    critical: false
```
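As a sketch of how the `critical` flag could drive alerting, the list above maps naturally onto an escalation policy. The policy here (critical services alert immediately, the rest go to a digest) is an assumption for illustration, not ANSAI's documented behavior:

```python
# Same services as the YAML above, as plain Python data
services = [
    {"name": "nginx", "description": "Web server", "critical": True},
    {"name": "postgresql", "description": "Database", "critical": True},
    {"name": "redis", "description": "Cache", "critical": False},
]

def escalation_for(service: dict) -> str:
    """Critical services alert immediately; the rest go to a daily digest."""
    return "email-now" if service["critical"] else "daily-digest"

routes = {s["name"]: escalation_for(s) for s in services}
```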
Re-deploy: `ansible-playbook playbooks/deploy-ai-powered-monitoring.yml -i inventory/hosts.yml`
### 2. Add More Monitoring

Deploy the complete monitoring stack:

```bash
# JavaScript error monitoring for web apps
ansible-playbook playbooks/deploy-js-monitoring.yml

# CSS error monitoring
ansible-playbook playbooks/deploy-css-monitoring.yml

# External monitoring (Healthchecks.io)
export HEALTHCHECK_ENABLED=true
export HEALTHCHECK_PING_URL="https://hc-ping.com/your-uuid"
ansible-playbook playbooks/deploy-healthchecks.yml
```

See: Complete Monitoring Stack Guide
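Healthchecks.io works by receiving periodic HTTP pings; appending `/fail` to the ping URL signals a failure. A minimal stdlib pinger sketch (URL handling only; actually sending requires network access):

```python
import urllib.request

def ping_url(base: str, ok: bool = True) -> str:
    """Build the Healthchecks.io ping URL; the /fail suffix marks a failure."""
    return base if ok else base.rstrip("/") + "/fail"

def ping(base: str, ok: bool = True, timeout: int = 10) -> None:
    """Send the success or failure ping."""
    urllib.request.urlopen(ping_url(base, ok), timeout=timeout)

# e.g. at the end of a cron job:
#   ping("https://hc-ping.com/your-uuid", ok=job_succeeded)
```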
### 3. Integrate with Your IDE

Using Cursor IDE? Set up AI-powered automation in your editor:

```bash
# Auto-generate .cursorrules on context switch
ansai-context-switch work

# Analyze logs from Cursor terminal
journalctl -u myapp | ansai-fabric logs

# Natural language ops
# In Cursor: "Why is CPU high?"
```

See: Cursor IDE Integration Guide
### 4. Explore Example Workflows

```bash
# Context management
ansai-context-switch personal

# Progress tracking
ansai-progress-tracker

# Secrets management
ansai-vault-read myapp/prod/api-keys

# AI log analysis
ansai-fabric logs < /var/log/syslog
```

See: ~/.ansai/examples/workflows/
### 5. Build Your Own

ANSAI is a framework, not a product. Create your own building blocks:

- Custom Ansible roles
- AI-powered workflows
- Monitoring patterns
- Automation scripts

Share what you build: https://github.com/thebyrdman-git/ansai/discussions
## Learn More

### Documentation
- Full Docs: https://ansai.dev
- Self-Healing: https://ansai.dev/self-healing/
- Integrations: https://ansai.dev/integrations/
- Community: https://github.com/thebyrdman-git/ansai/discussions
### Example Use Cases

### Community
- Show & Tell: Share your builds
- Ideas: Request features
- Q&A: Get help
Join: https://github.com/thebyrdman-git/ansai/discussions
## Troubleshooting

### Issue: LiteLLM not starting
Solution:

```bash
# Check if port 4000 is in use
lsof -i :4000

# Kill existing process
kill -9 $(lsof -t -i :4000)

# Restart LiteLLM
litellm --config ~/.config/ansai/litellm_config.yaml --port 4000
```
### Issue: Ansible playbook fails with "Permission denied"

Solution:

```bash
# Test SSH connection
ssh [email protected]

# Verify sudo access
ssh [email protected] "sudo whoami"
# Should return: root

# Check ansible.cfg
cat ~/.ansai/orchestrators/ansible/ansible.cfg
```
### Issue: No email alerts received

Solution:

```bash
# Test SMTP configuration
python3 << 'EOF'
import smtplib
from email.mime.text import MIMEText
import os

msg = MIMEText("Test from ANSAI")
msg['Subject'] = "ANSAI Test Email"
msg['From'] = os.environ['ANSAI_SMTP_USER']
msg['To'] = os.environ['ANSAI_ADMIN_EMAIL']

with smtplib.SMTP(os.environ['ANSAI_SMTP_SERVER'], int(os.environ['ANSAI_SMTP_PORT'])) as server:
    server.starttls()
    server.login(os.environ['ANSAI_SMTP_USER'], os.environ['ANSAI_SMTP_PASSWORD'])
    server.send_message(msg)

print("✅ Email sent successfully!")
EOF
```
### Issue: Services not healing

Solution:

```bash
# Check self-healing logs
sudo tail -f /var/log/self-heal/*.log

# Verify systemd OnFailure hooks
systemctl show sshd | grep OnFailure

# Manually trigger healing
sudo systemctl start sshd-self-heal.service

# Check email delivery
sudo journalctl -u sshd-self-heal.service
```
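The hooks checked above rely on systemd's `OnFailure=` directive, which starts another unit when a service enters a failed state. A hypothetical drop-in sketch (the unit names are illustrative; ANSAI's generated units may differ):

```ini
# /etc/systemd/system/sshd.service.d/self-heal.conf
[Unit]
# When sshd enters a failed state, start the healing unit
OnFailure=sshd-self-heal.service
```

After adding a drop-in, run `sudo systemctl daemon-reload` for it to take effect.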
More help: https://ansai.dev/TROUBLESHOOTING/
## Pro Tips
- Start with one service: Monitor sshd first, then expand
- Test self-healing manually: Stop services to verify behavior
- Use local models first: Ollama for testing, cloud for production
- Set up external monitoring: Healthchecks.io catches server-down scenarios
- Integrate with your IDE: Makes AI automation part of your workflow
## Quick Reference

```bash
# Installation
curl -sSL https://raw.githubusercontent.com/thebyrdman-git/ansai/main/install.sh | bash

# Start AI backend
litellm --config ~/.config/ansai/litellm_config.yaml --port 4000 &

# Deploy monitoring
cd ~/.ansai/orchestrators/ansible
ansible-playbook playbooks/deploy-ai-powered-monitoring.yml -i inventory/hosts.yml

# Check status
ssh your-server "sudo /usr/local/bin/miraclemax-status.sh"

# View logs
ssh your-server "sudo tail -f /var/log/self-heal/*.log"

# Test self-healing
ssh your-server "sudo systemctl stop sshd"
```
Welcome to ANSAI!
AI-powered automation starts now.
Questions? https://github.com/thebyrdman-git/ansai/discussions