
AlligatorC0der/conKurrence

MCP agent status: Offline

AI evaluation toolkit that measures inter-rater agreement (Fleiss' κ, Kendall's W) across multiple LLM providers. Evaluate prompt reliability, detect contested outputs, and track consensus trends over time.
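The agreement measure named above can be sketched in a few lines. This is a minimal, self-contained Fleiss' κ implementation for illustration only; it is not taken from the conKurrence codebase, and the function and variable names are assumptions:

```python
# Minimal sketch of Fleiss' kappa for categorical ratings.
# counts[i][j] = number of raters who assigned item i to category j.
# Assumes every item is rated by the same number of raters.

def fleiss_kappa(counts):
    N = len(counts)        # number of items
    n = sum(counts[0])     # raters per item
    total = N * n          # total ratings given

    # Per-item agreement: fraction of rater pairs that agree on item i.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N

    # Chance agreement from the marginal category proportions.
    k = len(counts[0])
    p_j = [sum(row[j] for row in counts) / total for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Three raters, four items, two categories: perfect agreement on
# three items and a 1-2 split on the fourth.
ratings = [[3, 0], [0, 3], [3, 0], [1, 2]]
print(round(fleiss_kappa(ratings), 3))  # 0.657
```

A κ near 1 indicates the providers agree far beyond chance; values near 0 suggest the prompt's outputs are contested across models.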

Scan Scheduled

This agent is queued for security scanning. It will be graded in the next scan batch.

What We Know