
Using Hydra with Codex

Integrate Hydra workflows with OpenAI Codex for automated code generation pipelines.

Overview

Codex runs Hydra workflows through the function calling pattern: each workflow step is exposed as a function definition that the model can call.

Setup

1. Deploy Your Workflow

  1. Create a workflow in the Hydra Dashboard
  2. Click Deploy
  3. Select Codex as the target adapter
  4. Note your deployment ID

2. Get API Credentials

  1. Go to Settings → API Keys in the Hydra Dashboard
  2. Generate a new API key
  3. Save it securely (for example, as an environment variable; see the sketch below)
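
One way to keep the key out of source control is to read it from an environment variable at runtime. This is only a sketch; the HYDRA_API_KEY variable name matches the secret name used in the CI/CD example later on, but any name works:

import os

# Example: read the key from an environment variable instead of hard-coding it
API_KEY = os.environ["HYDRA_API_KEY"]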

3. Fetch Workflow Functions

import requests
 
# Use the appropriate API base URL for your environment
# Dev: https://zl5lywlc5d.execute-api.us-west-2.amazonaws.com/v1
# Beta: https://u3srgjr1uf.execute-api.us-west-2.amazonaws.com/v1
# Prod: https://1k2cc1sqa7.execute-api.us-west-2.amazonaws.com/v1
HYDRA_API = "YOUR_API_BASE_URL"
API_KEY = "your_api_key"
DEPLOYMENT_ID = "your_deployment_id"
 
response = requests.get(
    f"{HYDRA_API}/deployments/{DEPLOYMENT_ID}/prompts/codex",
    headers={"Authorization": f"Bearer {API_KEY}"}
)
response.raise_for_status()
 
workflow_functions = response.json()["data"]["functions"]

Using with OpenAI API

Basic Usage

import openai
import json
 
openai.api_key = "your_openai_key"
 
# Fetch Hydra workflow functions (get_hydra_functions wraps the fetch request shown above)
hydra_functions = get_hydra_functions(deployment_id)
 
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a code assistant using Hydra workflows."},
        {"role": "user", "content": "Run the Bug Fix workflow on auth/login.ts"}
    ],
    functions=hydra_functions,
    function_call="auto"
)
 
# Handle function calls
if response.choices[0].message.get("function_call"):
    function_name = response.choices[0].message["function_call"]["name"]
    arguments = json.loads(response.choices[0].message["function_call"]["arguments"])
 
    # Execute the Hydra function (execute_hydra_function is your own
    # dispatcher; see the complete example below)
    result = execute_hydra_function(function_name, arguments)

Complete Example

import openai
import requests
import json
 
class HydraCodex:
    def __init__(self, hydra_api_key: str, openai_api_key: str, api_base_url: str):
        self.hydra_api_key = hydra_api_key
        self.api_base_url = api_base_url
        openai.api_key = openai_api_key
 
    def get_workflow_functions(self, deployment_id: str) -> list:
        """Fetch function definitions from Hydra"""
        # Use the appropriate API base URL for your environment
        response = requests.get(
            f"{self.api_base_url}/deployments/{deployment_id}/prompts/codex",
            headers={"Authorization": f"Bearer {self.hydra_api_key}"}
        )
        response.raise_for_status()
        return response.json()["data"]["functions"]
 
    def run_workflow(self, deployment_id: str, user_prompt: str) -> str:
        """Execute a Hydra workflow using Codex"""
        functions = self.get_workflow_functions(deployment_id)
 
        messages = [
            {"role": "system", "content": "Execute Hydra workflow steps in order."},
            {"role": "user", "content": user_prompt}
        ]
 
        while True:
            response = openai.ChatCompletion.create(
                model="gpt-4",
                messages=messages,
                functions=functions,
                function_call="auto"
            )
 
            message = response.choices[0].message
 
            if message.get("function_call"):
                # Execute the function
                result = self.execute_function(
                    message["function_call"]["name"],
                    json.loads(message["function_call"]["arguments"])
                )
 
                # Add result to conversation
                messages.append(message)
                messages.append({
                    "role": "function",
                    "name": message["function_call"]["name"],
                    "content": json.dumps(result)
                })
            else:
                # Workflow complete
                return message["content"]
 
    def execute_function(self, name: str, args: dict) -> dict:
        """Execute a Hydra workflow function"""
        # Implementation depends on your environment
        # Could call local tools, APIs, etc.
        pass
 
# Usage
hydra = HydraCodex(
    hydra_api_key="your_hydra_key",
    openai_api_key="your_openai_key",
    api_base_url="https://API_BASE_URL/v1"  # Use appropriate URL for your environment
)
 
result = hydra.run_workflow(
    deployment_id="dep_123",
    user_prompt="Fix the authentication bug in login.ts"
)
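
The execute_function stub in the class above is intentionally left open. As a minimal sketch of one way to fill it in, assuming the execute_command and edit_file mappings listed under Action Mappings below (the handling here is illustrative, not required behavior):

import subprocess

def execute_function(self, name: str, args: dict) -> dict:
    """Example dispatcher: route generated function calls to local tooling."""
    if name == "execute_command":
        # Caution: this runs model-generated shell commands; sandbox appropriately
        proc = subprocess.run(args["cmd"], shell=True, capture_output=True, text=True)
        return {"exit_code": proc.returncode, "stdout": proc.stdout, "stderr": proc.stderr}
    if name == "edit_file":
        # Simplest possible handling: overwrite the file with the proposed contents
        with open(args["path"], "w") as f:
            f.write(args["changes"])
        return {"status": "written", "path": args["path"]}
    # Report unhandled functions back to the model instead of failing silently
    return {"error": f"No local handler for function '{name}'"}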

Adapter Configuration

Basic Configuration

{
  "adapters": {
    "codex": {
      "mode": "direct",
      "config": {
        "temperature": 0.2,
        "max_tokens": 2000
      }
    }
  }
}

Configuration Options

Option | Type | Default | Description
temperature | number | 0.2 | Model creativity (0-1)
max_tokens | number | 2000 | Max tokens per response

Action Mappings

How HMS actions translate to Codex functions:

HMS Action | Generated Function
analyze_code | analyze_code(files, focus)
edit_file | edit_file(path, changes)
edit_files | edit_files(files, changes)
generate_code | generate_code(spec, path)
generate_tests | generate_tests(target, framework)
execute_command | execute_command(cmd)
search_references | search_references(query)
design_architecture | design_architecture(requirements)
review_and_commit | review_and_commit(message)

Generated Functions Example

For a Bug Fix workflow, Hydra generates:

{
  "functions": [
    {
      "name": "analyze_bug",
      "description": "Analyze code to identify the bug's root cause",
      "parameters": {
        "type": "object",
        "properties": {
          "files": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Files to analyze"
          },
          "focus": {
            "type": "string",
            "description": "What to focus on"
          }
        },
        "required": ["files"]
      }
    },
    {
      "name": "implement_fix",
      "description": "Implement the bug fix",
      "parameters": {
        "type": "object",
        "properties": {
          "file": {
            "type": "string",
            "description": "File to modify"
          },
          "changes": {
            "type": "string",
            "description": "Changes to make"
          }
        },
        "required": ["file", "changes"]
      }
    },
    {
      "name": "write_tests",
      "description": "Write tests to verify the fix",
      "parameters": {
        "type": "object",
        "properties": {
          "test_file": {
            "type": "string",
            "description": "Test file path"
          },
          "coverage_target": {
            "type": "number",
            "description": "Target coverage percentage"
          }
        },
        "required": ["test_file"]
      }
    }
  ]
}

Automation Pipelines

CI/CD Integration

# .github/workflows/hydra-codex.yml
name: Hydra Code Review
 
on:
  pull_request:
    types: [opened, synchronize]
 
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
 
      - name: Run Hydra Code Review
        env:
          HYDRA_API_KEY: ${{ secrets.HYDRA_API_KEY }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          python scripts/hydra_review.py \
            --deployment ${{ vars.HYDRA_DEPLOYMENT_ID }} \
            --files "$(git diff --name-only origin/main)"
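
The job above assumes a small driver script. A minimal sketch of what scripts/hydra_review.py might look like, reusing the HydraCodex class from the complete example (the module name and prompt wording are illustrative):

# scripts/hydra_review.py -- illustrative sketch
import argparse
import os

from hydra_codex import HydraCodex  # the class from the complete example; module name is hypothetical


def main():
    parser = argparse.ArgumentParser(description="Run a Hydra code review workflow")
    parser.add_argument("--deployment", required=True, help="Hydra deployment ID")
    parser.add_argument("--files", required=True, help="Newline-separated list of changed files")
    args = parser.parse_args()

    hydra = HydraCodex(
        hydra_api_key=os.environ["HYDRA_API_KEY"],
        openai_api_key=os.environ["OPENAI_API_KEY"],
        api_base_url=os.environ.get("HYDRA_API_BASE_URL", "https://API_BASE_URL/v1"),
    )
    result = hydra.run_workflow(
        deployment_id=args.deployment,
        user_prompt=f"Review the following changed files:\n{args.files}",
    )
    print(result)


if __name__ == "__main__":
    main()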

Scheduled Analysis

# Run nightly security scan
import schedule
import time
 
def nightly_scan():
    hydra = HydraCodex(...)
    result = hydra.run_workflow(
        deployment_id="security_scan_deployment",
        user_prompt="Scan the entire codebase for security vulnerabilities"
    )
    send_report(result)  # your own reporting/notification helper
 
schedule.every().day.at("02:00").do(nightly_scan)
 
# Keep the scheduler running
while True:
    schedule.run_pending()
    time.sleep(60)

Best Practices

1. Use Low Temperature for Code

{
  "config": {
    "temperature": 0.1
  }
}

2. Handle Function Errors

try:
    result = execute_function(name, args)
except HydraError as e:
    # Log error and continue or abort
    messages.append({
        "role": "function",
        "name": name,
        "content": json.dumps({"error": str(e)})
    })

3. Validate Function Outputs

def execute_function(name: str, args: dict) -> dict:
    result = _execute(name, args)
 
    # Validate before returning
    if not validate_output(name, result):
        raise ValidationError(f"Invalid output for {name}")
 
    return result

4. Set Reasonable Token Limits

Increase max_tokens for complex workflows:

{
  "config": {
    "max_tokens": 4000
  }
}

Troubleshooting

Function Not Found

  • Verify your deployment includes the Codex adapter
  • Check the function name matches exactly
  • Fetch fresh functions from the API

Token Limit Exceeded

  • Increase max_tokens in the adapter config
  • Break large files into smaller chunks (see the sketch after this list)
  • Use streaming for long operations
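
A simple way to break a large file's contents into smaller chunks before sending them to the model (the chunk size here is arbitrary):

def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split a file's contents into fixed-size character chunks."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]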

Rate Limiting

  • Implement exponential backoff (see the sketch after this list)
  • Cache workflow functions instead of re-fetching them on every request
  • Use batch processing for multiple files
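
A minimal sketch combining exponential backoff with a simple in-process cache for workflow functions (retry counts, delays, and helper names are illustrative):

import random
import time

def with_backoff(fn, max_retries: int = 5):
    """Retry fn with exponential backoff plus jitter; re-raise after the last attempt."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # narrow this to your client's rate-limit exception type
            if attempt == max_retries - 1:
                raise
            time.sleep((2 ** attempt) + random.random())

_function_cache: dict = {}

def get_cached_functions(hydra, deployment_id: str) -> list:
    """Fetch workflow functions once per deployment and reuse them afterwards."""
    if deployment_id not in _function_cache:
        _function_cache[deployment_id] = hydra.get_workflow_functions(deployment_id)
    return _function_cache[deployment_id]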

Next Steps