Asylum

Feedback Loop

Overview

The Project Asylum feedback loop is the core of the system’s self-adapting capabilities. It continuously monitors honeypot activity, analyzes attacker behavior using AI/ML models, and automatically adjusts infrastructure to optimize security and deception.

Components

1. Data Collection

2. Analysis Pipeline

3. Decision Engine

4. Infrastructure Adaptation

5. Model Retraining
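The five components above form one pass of the loop. The sketch below is purely illustrative: every function name and data shape is an assumption for explanation, not Project Asylum's actual module API.

```python
# Illustrative sketch of one feedback-loop iteration.
# All names and data shapes are assumptions, not the project's real API.

def collect_events():
    # 1. Data Collection: pull recent honeypot events (stub data here)
    return [{"src_ip": "203.0.113.7", "anomaly_score": 0.62}]

def analyze(events):
    # 2. Analysis Pipeline: aggregate events into an anomaly score
    return max(e["anomaly_score"] for e in events)

def decide(score):
    # 3. Decision Engine: map the score to an action
    return "immediate_action" if score >= 0.5 else "monitor"

def adapt(action):
    # 4. Infrastructure Adaptation: would call the orchestration API
    return {"action": action, "applied": action == "immediate_action"}

def run_iteration():
    # 5. Model Retraining runs on its own schedule and is omitted here
    return adapt(decide(analyze(collect_events())))
```

Model retraining is deliberately left out of the per-iteration path: it runs on its own cron schedule (see MODEL_RETRAIN_INTERVAL below).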

Decision Thresholds

Anomaly Rate Thresholds

Response Times

Workflow Diagram

graph TD
    A[Honeypot Logs] --> B[Logstash]
    B --> C[Elasticsearch]
    C --> D[Scheduler]
    D --> E[Fetch & Extract Features]
    E --> F[AI Analysis]
    F --> G{Severity?}
    G -->|Critical| H[Immediate Action]
    G -->|High| I[Scheduled Action]
    G -->|Medium| J[Monitor]
    G -->|Low| K[Log Only]
    H --> L[Orchestration API]
    I --> L
    L --> M{Action Type}
    M -->|Scale| N[Update Terraform]
    M -->|Rotate| O[Rotate Honeypots]
    M -->|Alert| P[Send Notifications]
    N --> Q[Apply Changes]
    O --> Q
    Q --> R[Update State]
    R --> S[Feedback to AI]
    S --> F

Configuration

Environment Variables

# Analysis frequency (cron format)
ANALYSIS_INTERVAL="*/15 * * * *"  # Every 15 minutes

# Infrastructure drift check
DRIFT_CHECK_INTERVAL="0 */6 * * *"  # Every 6 hours

# Model retraining schedule
MODEL_RETRAIN_INTERVAL="0 2 * * *"  # Daily at 2 AM

# Anomaly thresholds
ANOMALY_THRESHOLD_CRITICAL=0.5
ANOMALY_THRESHOLD_HIGH=0.2
ANOMALY_THRESHOLD_MEDIUM=0.1
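Under these thresholds, the severity routing in the workflow diagram can be derived from an anomaly rate roughly as follows. This is a sketch assuming the highest matching threshold wins; the service's actual threshold handling may differ.

```python
import os

# Mirror the environment variables above; defaults match the example values.
CRITICAL = float(os.environ.get("ANOMALY_THRESHOLD_CRITICAL", "0.5"))
HIGH = float(os.environ.get("ANOMALY_THRESHOLD_HIGH", "0.2"))
MEDIUM = float(os.environ.get("ANOMALY_THRESHOLD_MEDIUM", "0.1"))

def severity(anomaly_rate: float) -> str:
    # Highest threshold wins; anything below MEDIUM is "low" (log only).
    if anomaly_rate >= CRITICAL:
        return "critical"
    if anomaly_rate >= HIGH:
        return "high"
    if anomaly_rate >= MEDIUM:
        return "medium"
    return "low"
```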

Terraform Auto-Apply Settings

For production environments, require manual approval for all applies, and guard critical resources against accidental destruction:

# terraform/main.tf (lifecycle blocks go inside the relevant resource block)
lifecycle {
  prevent_destroy = true
}

Enable auto-apply only for development:

export TF_AUTO_APPROVE=true  # Development only
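One way to enforce the development-only rule is a guard that builds the terraform apply command line. This helper is an assumption for illustration; only TF_AUTO_APPROVE comes from the documentation above.

```python
import os

def terraform_apply_args(deployment_env: str) -> list[str]:
    # Pass -auto-approve only when explicitly enabled AND not in production.
    auto = os.environ.get("TF_AUTO_APPROVE", "false").lower() == "true"
    args = ["terraform", "apply"]
    if auto and deployment_env != "production":
        args.append("-auto-approve")
    return args
```

Production applies therefore always fall through to Terraform's interactive approval prompt, regardless of the environment variable.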

Monitoring the Feedback Loop

Prometheus Metrics

Grafana Dashboards

Kibana Queries

{
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-1h" } } },
        { "term": { "event_category": "feedback_loop" } }
      ]
    }
  }
}

Manual Intervention

Pausing the Feedback Loop

# Stop the scheduler
docker-compose stop scheduler

# Or disable auto-actions via environment
export FEEDBACK_LOOP_ENABLED=false
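A worker can honor FEEDBACK_LOOP_ENABLED with a simple gate check before taking any automated action. This sketch assumes the loop defaults to enabled when the variable is unset; the real scheduler may handle it differently.

```python
import os

def feedback_loop_enabled() -> bool:
    # Treat anything other than an explicit "false" as enabled,
    # so the loop runs by default when the variable is unset.
    return os.environ.get("FEEDBACK_LOOP_ENABLED", "true").lower() != "false"
```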

Reviewing Pending Changes

# Check Terraform plan
cd terraform
terraform plan

# Review AI recommendations
curl http://localhost:8000/state

Manual Trigger

# Trigger immediate analysis
curl -X POST http://localhost:3001/events \
  -H "Content-Type: application/json" \
  -d '{
    "type": "manual_analysis",
    "source": "admin",
    "data": {}
  }'

Best Practices

  1. Start Conservative: Begin with manual approval for all infrastructure changes
  2. Monitor Closely: Watch the first 48 hours of automated operation
  3. Set Limits: Configure maximum node count and budget limits in Terraform
  4. Version Control: All Terraform changes should be committed and reviewed
  5. Alerting: Set up notifications for critical severity events
  6. Backup State: Regularly backup Terraform state and AI models
  7. Test First: Validate feedback loop in development environment
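Practice 3 (set limits) can also be enforced in the decision engine itself, before any scale request reaches Terraform. A minimal sketch, with the floor and cap values as assumed examples:

```python
def clamp_node_count(requested: int, min_nodes: int = 1, max_nodes: int = 10) -> int:
    # Never let an automated decision scale below the floor
    # or above the budget cap, whatever the AI recommends.
    return max(min_nodes, min(requested, max_nodes))
```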

Troubleshooting

Feedback Loop Not Triggering

Too Many False Positives

Infrastructure Not Updating

Security Considerations

  1. Credentials: Never commit AWS/GCP keys or secrets
  2. API Access: Restrict orchestration API to internal network
  3. Approval Gates: Require human approval for production changes
  4. Audit Trail: Log all automated decisions and changes
  5. Rate Limiting: Prevent infinite scaling loops
  6. Rollback Plan: Maintain ability to quickly revert changes