Active Inference Curriculum Creation Scripts

This directory contains a comprehensive suite of Python scripts for creating personalized Active Inference curricula. The scripts follow a modular, test-driven development approach and implement the full curriculum generation pipeline from research to translation.

Overview

The curriculum creation process follows these stages:

  1. Research (Scripts 1_*): Analyze domains and target audiences
  2. Content Generation (Script 2_*): Create tailored curriculum content
  3. Visualization (Script 3_*): Generate charts and diagrams
  4. Translation (Script 4_*): Translate to multiple languages

Scripts

1_Research_Domain.py

Purpose: Analyzes domain characteristics to create domain-specific Active Inference curricula.

Key Features:

Usage:

python 1_Research_Domain.py

Input: Domain files from Languages/Inputs_and_Outputs/Domain/Synthetic_*.md
Output: Research reports in data/domain_research/
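
A minimal sketch of how the input files can be discovered and paired with a timestamped report path (illustrative only, not the script's actual code; the timestamp format and the stripping of the Synthetic_ prefix are assumptions):

from pathlib import Path
from datetime import datetime

domain_dir = Path("Languages/Inputs_and_Outputs/Domain")
output_dir = Path("data/domain_research")
output_dir.mkdir(parents=True, exist_ok=True)

# Assumed timestamp format; the real scripts may use a different one.
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
for domain_file in sorted(domain_dir.glob("Synthetic_*.md")):
    domain = domain_file.stem.removeprefix("Synthetic_")
    report_path = output_dir / f"{domain}_research_{timestamp}.json"
    print(f"{domain_file} -> {report_path}")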

Functions:

1_Research_Entity.py

Purpose: Researches target audience characteristics for personalized curriculum creation.

Key Features:

Usage:

python 1_Research_Entity.py

Input: Entity files from Languages/Inputs_and_Outputs/Entity/*.py
Output: Audience research reports in data/audience_research/

Functions:

2_Write_Introduction.py

Purpose: Converts research reports into comprehensive Active Inference curricula.

Key Features:

Usage:

python 2_Write_Introduction.py

Input: Research files from data/domain_research/ and data/audience_research/
Output: Complete curricula in data/written_curriculums/
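
Because every report carries a timestamp in its file name, the most recent report per domain or entity can be selected with a small helper like the sketch below (illustrative; the helper and the example prefixes are hypothetical, not the script's actual code):

from pathlib import Path

def latest_report(directory: str, prefix: str) -> Path | None:
    # Reports are named {prefix}_research_{timestamp}.json, so sorting the
    # matching file names puts the newest last, assuming a sortable timestamp.
    matches = sorted(Path(directory).glob(f"{prefix}_research_*.json"))
    return matches[-1] if matches else None

# Hypothetical domain and entity names, shown only to illustrate the call.
domain_report = latest_report("data/domain_research", "Economics")
audience_report = latest_report("data/audience_research", "Graduate_Students")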

Functions:

3_Introduction_Visualizations.py

Purpose: Generates PNG charts and Mermaid diagrams for curriculum visualization.

Key Features:

Usage:

python 3_Introduction_Visualizations.py [--input INPUT_DIR] [--output OUTPUT_DIR]

Options:

Outputs:

Functions:

4_Translate_Introductions.py

Purpose: Translates generated curricula into the target languages configured in data/config/languages.yaml.

Key Features:

Usage:

python 4_Translate_Introductions.py [--input INPUT_DIR] [--output OUTPUT_DIR] [--languages LANG1 LANG2 ...]

Options:

Functions:

Data Structure

All scripts follow a consistent data organization pattern:

data/
├── audience_research/          # Entity/audience research reports
│   └── {entity}_research_{timestamp}.json
├── domain_research/           # Domain analysis reports
│   ├── {domain}_research_{timestamp}.json
│   └── {domain}_research_{timestamp}.md
├── written_curriculums/       # Generated curricula
│   └── {entity}/
│       ├── {section}_{timestamp}.md
│       └── complete_curriculum_{timestamp}.md
├── translated_curriculums/    # Translated curricula
│   └── {language}/
│       └── {entity}_curriculum_{language}_{timestamp}.md
└── visualizations/           # Charts and diagrams
    ├── curriculum_metrics.png
    ├── curriculum_structure.mmd
    └── {entity}_flow.mmd
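
If the layout does not exist yet, it can be created up front with a few lines (a convenience sketch only; the scripts are expected to create their own output directories as needed):

from pathlib import Path

# Create the data layout shown above before the first run.
for subdir in ("audience_research", "domain_research", "written_curriculums",
               "translated_curriculums", "visualizations"):
    Path("data", subdir).mkdir(parents=True, exist_ok=True)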

Configuration

Language Configuration

Configure target languages in data/config/languages.yaml:

target_languages:
  - Chinese
  - Spanish
  - Arabic
  - Hindi
  - French
  # ... more languages

script_mappings:
  Arabic: "Modern Standard Arabic"
  Chinese: "Simplified Chinese"
  # ... more mappings
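
A minimal sketch of reading this configuration with PyYAML (key names follow the example above):

import yaml  # PyYAML

with open("data/config/languages.yaml", encoding="utf-8") as f:
    config = yaml.safe_load(f)

target_languages = config.get("target_languages", [])
script_mappings = config.get("script_mappings", {})

# Resolve the concrete variant to request for a language, falling back to the plain name.
language = "Arabic"
requested_variant = script_mappings.get(language, language)  # "Modern Standard Arabic"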

Prompt Templates

Customize prompts in data/prompts/:
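
A sketch of loading and filling a template from that directory (the file name and placeholders are hypothetical; adjust to the actual templates):

from pathlib import Path

# Hypothetical template file and placeholders, shown only to illustrate the pattern.
template = Path("data/prompts/domain_research.txt").read_text(encoding="utf-8")
prompt = template.format(domain="Economics", audience="graduate students")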

Dependencies

Core Dependencies

Visualization Dependencies

Development Dependencies

Environment Setup

  1. Install dependencies:
    uv sync --all-extras --dev
    
  2. Set up environment variables:
    export PERPLEXITY_API_KEY="your-perplexity-key"
    export OPENROUTER_API_KEY="your-openrouter-key"
    
  3. Configure models (optional):
    export PERPLEXITY_MODEL="llama-3.1-sonar-small-128k-online"
    export OPENROUTER_MODEL="anthropic/claude-3.5-sonnet"
    
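Inside the scripts these values are read from the environment; a minimal sketch of the pattern, using the example models from step 3 as fallbacks (whether those are the scripts' built-in defaults is an assumption):

import os

# Required API keys; fail early with a clear message if they are missing.
perplexity_key = os.environ.get("PERPLEXITY_API_KEY")
openrouter_key = os.environ.get("OPENROUTER_API_KEY")
if not perplexity_key or not openrouter_key:
    raise SystemExit("Set PERPLEXITY_API_KEY and OPENROUTER_API_KEY before running the scripts.")

# Optional model overrides; the fallbacks below are the example models from step 3.
perplexity_model = os.environ.get("PERPLEXITY_MODEL", "llama-3.1-sonar-small-128k-online")
openrouter_model = os.environ.get("OPENROUTER_MODEL", "anthropic/claude-3.5-sonnet")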

Usage Examples

Complete Pipeline

Run all scripts in sequence:

# 1. Research domains and entities
python 1_Research_Domain.py
python 1_Research_Entity.py

# 2. Generate curricula
python 2_Write_Introduction.py

# 3. Create visualizations
python 3_Introduction_Visualizations.py

# 4. Translate to target languages
python 4_Translate_Introductions.py

Custom Visualization

Generate visualizations for specific input:

python 3_Introduction_Visualizations.py --input /path/to/curricula --output /path/to/viz

Selective Translation

Translate only to specific languages:

python 4_Translate_Introductions.py --languages Spanish French German

Error Handling & Quality Assurance

All scripts implement comprehensive error handling and quality assurance:

Robust Error Handling
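
As an illustration of the retry-with-backoff pattern typically wrapped around external API calls, a minimal sketch (the helper name and parameters are hypothetical, not the scripts' actual implementation):

import time

def call_with_retries(fn, attempts=3, base_delay=2.0):
    # Retry a flaky call with exponential backoff; re-raise after the final attempt.
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))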

Content Validation

Enhanced User Experience

Testing

The scripts include comprehensive tests:

Run tests:

python tests/test_curriculum_scripts_integration.py

Logging

All scripts use structured logging:

logger = common_setup_logging()  # shared logging helper used across the scripts
logger.info("Starting process")
logger.error("Process failed", extra={"error": str(e)})  # inside an except block, where e is the caught exception

Logs include:

Best Practices

Code Quality

Data Management

Performance

Troubleshooting

Common Issues

API Key Errors:

File Not Found Errors:

Memory Issues:

Network Errors:

Debug Mode

Enable verbose logging by setting log level:

import logging
logging.getLogger().setLevel(logging.DEBUG)  # raise verbosity on the root logger so all modules emit DEBUG output

Enhanced Troubleshooting

Configuration Errors:

API Connection Issues:

Content Quality Issues:

Processing Failures:

Contributing

Code Style

Testing

Documentation

License

This project follows the repository’s LICENSE terms.