AlligatorC0der/conKurrence
AI evaluation toolkit that measures inter-rater agreement (Fleiss' κ, Kendall's W) across multiple LLM providers. Evaluate prompt reliability, detect contested outputs, and track consensus trends over time.
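Both statistics named above are standard chance-corrected agreement measures. As a quick illustration of the kind of number the toolkit reports, here is a minimal Fleiss' kappa in plain Python. This is the textbook formula, not conkurrence's actual API; the function name, the pass/fail categories, and the example counts are all illustrative assumptions.

```python
# Minimal sketch of Fleiss' kappa (not conkurrence's API).
# ratings[i][j] = number of raters who put subject i in category j;
# every subject must be rated by the same number of raters.

def fleiss_kappa(ratings: list[list[int]]) -> float:
    N = len(ratings)        # number of subjects (e.g. prompts)
    n = sum(ratings[0])     # raters per subject (e.g. LLM judges)
    k = len(ratings[0])     # number of categories

    # Per-subject agreement: fraction of rater pairs that agree.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N

    # Expected chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 prompts, each judged pass/fail by 3 LLM raters.
counts = [[3, 0], [2, 1], [0, 3], [1, 2]]
print(f"Fleiss' kappa = {fleiss_kappa(counts):.3f}")  # ~0.333
```

A kappa near 1 means the judges agree far beyond chance; values near 0 (as in this example, where the raters split on half the prompts) flag the contested outputs the description mentions.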
Scan Scheduled
This agent is queued for security scanning. It will be graded in the next scan batch.
What We Know
- URL: https://github.com/AlligatorC0der/conkurrence
- Framework: mcp
- Sources: glama, mcp_registry
- First Seen: Apr 06, 2026
- Repository: github.com/AlligatorC0der/conkurrence