Understanding How Metrics Are Calculated
This guide provides a detailed explanation of how One Horizon calculates each metric. Understanding these calculations helps you interpret the data correctly and make informed decisions.
Service Architecture
One Horizon Insights uses a modern gRPC-based service architecture with five core services:
Core Services
- InsightsService.FetchTeamInsights: Aggregated team-level analytics and health indicators
- InsightsService.FetchUserInsights: Individual developer metrics and personal productivity data
- InsightsService.FetchVelocityTrend: Configurable time-series data with multiple aggregation levels
- InsightsService.FetchLanguageDistribution: Programming language usage analysis with trend data
- InsightsService.FetchTopContributors: Ranked contributor metrics with flexible ranking criteria
Data Model Foundation
All services use consistent protobuf message definitions:
```protobuf
// Core building blocks
message UserContribution {
  string userId = 1;
  int32 taskCount = 2;
  float score = 3;
  float percentage = 4;
}

message WeightedMetric {
  int32 count = 1;
  float weightedScore = 2;
  float averageComplexity = 3;
  repeated UserContribution contributions = 4;
}
```
Data Sources & Requirements
What Gets Measured
One Horizon analyzes completed work from your integrated tools:
- Version Control: Commits and pull requests from GitHub, GitLab, or similar
- Project Management: Completed tasks from Jira, Linear, Asana, or similar tools
- Calendar Data: Meeting information for collaboration analysis
- Code Analysis: Programming languages and lines of code changed
Task Filtering Criteria
Not all tasks are included in insights. We focus on day-scope tasks that represent meaningful work:
- Status: Only completed tasks are counted
- Scope: Only day-scope tasks (excludes quick fixes and long-term epics)
- Visibility: Only team-visible tasks (excludes private/personal tasks)
- Time Period: Tasks completed within the selected date range
This filtering ensures metrics reflect substantial, team-relevant work rather than administrative overhead.
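To make the criteria concrete, here is a minimal sketch of the filter as a predicate. The task shape and field names are illustrative, not One Horizon's actual schema:

```typescript
// Illustrative task shape; One Horizon's real schema may differ.
interface Task {
  status: "completed" | "in_progress" | "todo";
  scope: "quick_fix" | "day" | "epic";
  teamVisible: boolean;
  completedAt: Date;
}

// Keep only completed, day-scope, team-visible tasks inside the range.
// The end date is inclusive, matching single-range queries described below.
function isInsightEligible(task: Task, start: Date, end: Date): boolean {
  return (
    task.status === "completed" &&
    task.scope === "day" &&
    task.teamVisible &&
    task.completedAt.getTime() >= start.getTime() &&
    task.completedAt.getTime() <= end.getTime()
  );
}
```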
Complexity Weighting System
The Problem with Task Counting
Simply counting tasks gives misleading productivity metrics. A five-minute documentation update shouldn't count the same as a week-long architectural redesign.
Our Solution: Weighted Scoring
We assign complexity weights based on task difficulty:
```
Low Complexity Tasks  = 1.0 points
High Complexity Tasks = 3.0 points
Unknown Complexity    = 1.5 points (default when not specified)
```
Weighted Score Calculation
```
Team Weighted Score = Σ(task_complexity_weight)
Average Complexity  = Total Weighted Score ÷ Task Count
```
Example: A team completes 5 low-complexity tasks (5 × 1.0 = 5.0) and 2 high-complexity tasks (2 × 3.0 = 6.0) for a weighted score of 11.0 and average complexity of 1.57.
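The same calculation as a short sketch, reproducing the example numbers above; only the weight table is taken from the source:

```typescript
type Complexity = "low" | "high" | "unknown";

// Weights from the table above; "unknown" defaults to 1.5.
const COMPLEXITY_WEIGHT: Record<Complexity, number> = {
  low: 1.0,
  high: 3.0,
  unknown: 1.5,
};

function weightedScore(tasks: Complexity[]): { total: number; average: number } {
  const total = tasks.reduce((sum, c) => sum + COMPLEXITY_WEIGHT[c], 0);
  return { total, average: total / tasks.length };
}

// The worked example: 5 low-complexity and 2 high-complexity tasks.
const example: Complexity[] = ["low", "low", "low", "low", "low", "high", "high"];
console.log(weightedScore(example)); // { total: 11, average: 1.571... }
```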
Time Period Handling
Predefined Time Periods
The service supports standardized time periods via the TimePeriod enum:
```protobuf
enum TimePeriod {
  TIME_PERIOD_WEEK = 1;
  TIME_PERIOD_LAST_30_DAYS = 2;
  TIME_PERIOD_MONTH = 3;
  TIME_PERIOD_QUARTER = 4;
  TIME_PERIOD_HALF_YEAR = 5;
  TIME_PERIOD_YEAR = 6;
}
```
Aggregation Levels
For velocity trends, data can be aggregated at different levels:
```protobuf
enum AggregationLevel {
  AGGREGATION_LEVEL_DAILY = 1;
  AGGREGATION_LEVEL_WEEKLY = 2;
  AGGREGATION_LEVEL_MONTHLY = 3;
}
```
Date Range Handling
Inclusive End Dates (Most Queries)
For single time range queries (team summaries, user insights):
```
Start: 2024-01-01 00:00:00
End:   2024-01-31 23:59:59 (includes January 31st)
```
Exclusive End Dates (Velocity Trends)
For consecutive time periods to prevent overlap:
```
Period 1: [2024-01-01, 2024-01-08) - includes Jan 1-7
Period 2: [2024-01-08, 2024-01-15) - includes Jan 8-14
Period 3: [2024-01-15, 2024-01-22) - includes Jan 15-21
```
This ensures tasks completed at exactly the boundary time (e.g., 2024-01-08 00:00:00) are counted only once.
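A sketch of how consecutive half-open periods can be generated so that boundary tasks land in exactly one bucket; the week length and helper names are illustrative:

```typescript
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Consecutive half-open periods [start, end): a task completed exactly at a
// boundary (e.g. 2024-01-08 00:00:00) falls into exactly one period.
function weeklyPeriods(start: Date, count: number): Array<{ start: Date; end: Date }> {
  return Array.from({ length: count }, (_, i) => ({
    start: new Date(start.getTime() + i * WEEK_MS),
    end: new Date(start.getTime() + (i + 1) * WEEK_MS),
  }));
}

// Membership test: start is inclusive, end is exclusive.
const inPeriod = (t: Date, p: { start: Date; end: Date }) =>
  t.getTime() >= p.start.getTime() && t.getTime() < p.end.getTime();

const periods = weeklyPeriods(new Date("2024-01-01T00:00:00Z"), 3);
console.log(inPeriod(new Date("2024-01-08T00:00:00Z"), periods[0])); // false
console.log(inPeriod(new Date("2024-01-08T00:00:00Z"), periods[1])); // true
```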
Work Type Classification
Label-Based Categorization
The service uses label detection to automatically categorize work along three dimensions:
Work Type Classification
```protobuf
message WorkTypeMetric {
  // Individual work type counts and percentages
  int32 bugFixCount = 1;
  int32 featureCount = 2;
  int32 refactorCount = 3;
  int32 frontendCount = 4;
  int32 backendCount = 5;
  int32 infrastructureCount = 6;
  int32 documentationCount = 7;

  // Business category groupings
  int32 featureWorkCount = 17;      // New functionality
  int32 maintenanceWorkCount = 18;  // Bug fixes + refactoring

  // Technical stack groupings
  int32 frontendWorkCount = 21;
  int32 backendWorkCount = 22;
  int32 infraWorkCount = 23;
  int32 docsWorkCount = 24;
}
```
Programming Language Detection
```protobuf
message LanguageMetric {
  string language = 1;
  int32 linesChanged = 2;
  float percentage = 3;
  int32 taskCount = 4;
  repeated UserContribution contributions = 5;
}
```
Project/Topic Classification
```protobuf
message TopicMetric {
  string topic = 1;
  int32 taskCount = 2;
  float weightedScore = 3;
  float percentage = 4;
  repeated UserContribution contributions = 5;
}
```
Calculation Examples
Language Distribution
```
JavaScript: 1,200 lines changed (60%)
Python:       600 lines changed (30%)
Go:           200 lines changed (10%)
Total:      2,000 lines changed
```
Work Type Distribution
```
Feature Work:     8 tasks with total weighted score 15.0 (75%)
Maintenance Work: 3 tasks with total weighted score  5.0 (25%)
Total weighted score: 20.0
```
Team Health Calculations
Work-Life Balance Score (0-1, higher is better)
Measures sustainable work patterns by analyzing task completion times:
```
Outside Hours Count = tasks completed:
  - After 7 PM local time
  - Before 7 AM local time
  - On weekends (Saturday/Sunday)

Work-Life Balance = 1.0 - (Outside Hours Count ÷ Total Tasks)
```
Interpretation:
- 0.8-1.0: Excellent work-life balance
- 0.6-0.8: Good balance with occasional off-hours work
- 0.4-0.6: Concerning pattern, worth investigating
- 0.0-0.4: High risk of burnout
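A minimal sketch of the balance calculation, assuming each task's completion timestamp is already expressed in the user's local timezone:

```typescript
// Count tasks finished outside 7 AM-7 PM or on weekends, then derive the score.
// Assumes each Date is already in the relevant user's local timezone.
function workLifeBalance(completedAt: Date[]): number {
  if (completedAt.length === 0) return 1.0;
  const outsideHours = completedAt.filter((d) => {
    const hour = d.getHours();
    const day = d.getDay(); // 0 = Sunday, 6 = Saturday
    return hour >= 19 || hour < 7 || day === 0 || day === 6;
  }).length;
  return 1.0 - outsideHours / completedAt.length;
}
```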
Collaboration Index (0-1, higher indicates more collaboration)
Assumes complex tasks require more collaboration:
Collaboration Index = High Complexity Tasks ÷ Total Tasks
Reasoning: High-complexity work typically involves:
- Architecture discussions
- Code reviews with multiple people
- Cross-team coordination
- Knowledge sharing and mentoring
Knowledge Risk Assessment
```protobuf
message KnowledgeRisk {
  string topic = 1;
  int32 contributorCount = 2;
  float riskScore = 3;  // 0-1, higher = more risk
  repeated string primaryContributors = 4;
}
```
Identifies areas where knowledge is concentrated in too few people:
```
Risk Score = 1.0 ÷ Number of Contributors in Area
(capped at a maximum of 1.0 for single-person areas)
```
Example:
- Frontend Team: 4 contributors → Risk Score = 0.25 (low risk)
- Database Team: 1 contributor → Risk Score = 1.0 (high risk)
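The same scoring as a short sketch; the contributor names and topics are hypothetical:

```typescript
// Risk score per topic: the fewer distinct contributors, the higher the risk.
function knowledgeRisk(contributorsByTopic: Map<string, Set<string>>) {
  return [...contributorsByTopic].map(([topic, contributors]) => ({
    topic,
    contributorCount: contributors.size,
    riskScore: Math.min(1.0, 1.0 / contributors.size), // one person => 1.0
  }));
}

const byTopic = new Map<string, Set<string>>([
  ["frontend", new Set(["ana", "ben", "cai", "dee"])],
  ["database", new Set(["ana"])],
]);
console.log(knowledgeRisk(byTopic));
// frontend => 0.25 (low risk), database => 1.0 (high risk)
```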
Velocity Trend Health
```protobuf
message TeamHealth {
  float velocityTrend = 4;  // Positive/negative trend percentage
}
```
Compares recent performance to identify acceleration or deceleration:
```
Midpoint          = Start Date + (End Date - Start Date) ÷ 2
First Half Count  = tasks completed before midpoint
Second Half Count = tasks completed after midpoint
Velocity Trend    = (Second Half - First Half) ÷ First Half
```
Interpretation:
- Positive values: Team is accelerating
- Negative values: Team may be slowing down
- Near zero: Consistent, sustainable pace
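A sketch of the trend calculation; the zero-task guard for the first half is an assumption, since the source doesn't specify how an empty first half is handled:

```typescript
// Split the range at its midpoint and compare task counts in each half.
function velocityTrend(completedAt: Date[], start: Date, end: Date): number {
  const midpoint = (start.getTime() + end.getTime()) / 2;
  const firstHalf = completedAt.filter((d) => d.getTime() < midpoint).length;
  const secondHalf = completedAt.length - firstHalf;
  if (firstHalf === 0) return 0; // guard; assumed behavior for an empty first half
  return (secondHalf - firstHalf) / firstHalf;
}
```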
Velocity Trend Analysis
VelocityDataPoint Structure
```protobuf
message VelocityDataPoint {
  string period = 1;
  int32 taskCount = 2;
  float weightedScore = 3;
  float averageComplexity = 4;
  int32 lowComplexityCount = 5;
  int32 highComplexityCount = 6;
  int32 unknownComplexityCount = 7;
  repeated UserContribution contributions = 8;
}
```
Service Request Options
```protobuf
message FetchVelocityTrendRequest {
  string workspaceId = 1;
  repeated string userIds = 2;  // Empty for workspace-wide
  string teamId = 3;            // Optional team filter
  DateRange dateRange = 4;
  AggregationLevel aggregation = 6;
  bool splitByComplexity = 7;   // Include complexity breakdowns
}
```
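For illustration, a weekly, complexity-split request might look like the plain object below. The exact generated types depend on your protobuf toolchain, and all IDs here are placeholders:

```typescript
// Plain-object mirror of FetchVelocityTrendRequest; the generated client
// types may differ, and the workspace/team IDs are placeholders.
const request = {
  workspaceId: "ws-123",
  userIds: [] as string[],  // empty => workspace-wide
  teamId: "team-42",        // optional team filter
  dateRange: {
    start: "2024-01-01T00:00:00Z",
    end: "2024-03-31T23:59:59Z",
  },
  aggregation: 2,           // AGGREGATION_LEVEL_WEEKLY
  splitByComplexity: true,  // include per-complexity breakdowns
};
```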
Individual-Specific Calculations
Focus Score
```protobuf
message UserInsights {
  float focusScore = 8;  // 0-1, higher indicates more focus
}
```
Measures how concentrated someone's work is across projects:
Focus Score = (Highest Project Weighted Score ÷ Total Personal Weighted Score)
Example: If someone's total weighted score is 20.0 and their top project accounts for 15.0, their focus score is 0.75 (highly focused).
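The same calculation as a sketch, using the example numbers above; the project names are hypothetical:

```typescript
// Focus score: the top project's share of a person's total weighted output.
function focusScore(weightedScoreByProject: Map<string, number>): number {
  const scores = [...weightedScoreByProject.values()];
  const total = scores.reduce((a, b) => a + b, 0);
  return total === 0 ? 0 : Math.max(...scores) / total;
}

// The example above: 15.0 of 20.0 in the top project => 0.75.
console.log(
  focusScore(new Map([["billing", 15.0], ["admin-tools", 3.0], ["docs", 2.0]]))
);
```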
Primary Language
```protobuf
message UserInsights {
  string primaryLanguage = 7;  // Language with most lines changed
}
```
The programming language with the most lines of code changed during the period.
Contribution Percentage Calculations
Team Member Contributions
For any metric (language, topic, work type), individual contributions are calculated as:
User Percentage = (User's Metric Value ÷ Total Team Metric Value) × 100
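A minimal sketch of the percentage calculation, reusing the language-distribution numbers from earlier as per-user input; the user names are hypothetical:

```typescript
// Each user's percentage of the team total for one category.
function contributionPercentages(valueByUser: Map<string, number>): Map<string, number> {
  const total = [...valueByUser.values()].reduce((a, b) => a + b, 0);
  const result = new Map<string, number>();
  for (const [user, value] of valueByUser) {
    result.set(user, total === 0 ? 0 : (value / total) * 100);
  }
  return result;
}

console.log(contributionPercentages(new Map([["ana", 1200], ["ben", 600], ["cai", 200]])));
// ana => 60, ben => 30, cai => 10
```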
Stacked Chart Data
Contribution charts show each team member's percentage of the total for each category, ensuring:
- All percentages sum to 100% for each category
- Visual representation accurately reflects individual impact
- Consistent color coding across all visualizations
Service Response Patterns
Team Insights Response
```protobuf
message FetchTeamInsightsResponse {
  TeamInsights insights = 1;
  TeamHealth healthIndicators = 2;
  repeated ContributorMetric topContributors = 3;
}
```
User Insights Response
```protobuf
message FetchUserInsightsResponse {
  UserInsights insights = 1;
}
```
Language Distribution Response
```protobuf
message FetchLanguageDistributionResponse {
  repeated LanguageMetric languages = 1;
  map<string, float> trendByLanguage = 2;  // Optional trend analysis
}
```
Chart Visualization Mappings
Activity Heatmap
Uses per-day aggregated data:
- Color Intensity: Based on weightedScore per day
- Tooltip Data: Shows taskCount and averageComplexity
- Date Range: Automatically calculates week boundaries for display
Contribution Charts
Transform service data into stacked bar chart format:
- Categories: Work types, languages, or topics
- Series: One per team member with consistent colors
- Values: Percentage contributions from UserContribution messages
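One way to perform that transform, sketched below; the output shape (a category axis plus one value array per user) is an assumption about the chart library, not a documented format:

```typescript
interface UserContribution {
  userId: string;
  percentage: number;
}

// Pivot per-category contribution lists into one value array per user,
// aligned with the category axis of a stacked bar chart.
function toStackedSeries(byCategory: Map<string, UserContribution[]>) {
  const categories = [...byCategory.keys()];
  const series = new Map<string, number[]>();
  categories.forEach((category, i) => {
    for (const c of byCategory.get(category) ?? []) {
      if (!series.has(c.userId)) {
        series.set(c.userId, new Array(categories.length).fill(0));
      }
      series.get(c.userId)![i] = c.percentage; // per-category values sum to 100
    }
  });
  return { categories, series };
}
```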
Data Quality & Limitations
What We Can Measure Accurately
- Completed tasks: Clear completion timestamps and complexity markers
- Code changes: Commit data with language detection
- Team composition: User roles and team membership
- Work categorization: Automated label detection and classification
What Requires Interpretation
- Work quality: We measure completion, not code quality or user impact
- Collaboration depth: Complex task count doesn't reflect collaboration quality
- Individual context: Personal circumstances, learning periods, or special projects
Potential Data Gaps
- Manual tasks: Work done outside tracked systems
- Thinking time: Architecture planning, problem-solving sessions
- Helping others: Informal mentoring or untracked collaboration
- Context switching: Impact of interruptions on productivity
Best Practices for Interpretation
Focus on Trends, Not Snapshots
- Weekly variations are normal and expected
- Monthly trends provide better insights into patterns
- Quarterly comparisons show meaningful progress or concerns
Consider Context
- Project phases: Planning periods vs. implementation sprints
- Team changes: New hires, departures, or role changes
- External factors: Holidays, conferences, or organizational changes
Use Multiple Metrics Together
- High velocity + low complexity: May indicate routine work
- Low velocity + high complexity: Likely tackling difficult problems
- Balanced metrics: Suggests healthy, sustainable productivity
Questions About Calculations?
Common Clarifications
Q: Why do some tasks not appear in metrics? A: Only day-scope, team-visible, completed tasks are included to focus on meaningful work.
Q: How accurate is the complexity weighting? A: It's based on task labels and provides relative comparison. Individual tasks may vary, but patterns are generally accurate.
Q: What if someone works on unlabeled tasks? A: Unlabeled work gets "unknown" complexity (1.5 weight) and appears in "other" categories.
Q: How do you handle timezone differences? A: Work-life balance calculations use local time based on user preferences.
Q: How are team contributions calculated? A: Each person's percentage is their individual score divided by the team total for that category.
Still have questions about specific calculations?
These calculations are designed to be transparent and auditable. If you notice unexpected results, check task labeling, completion timestamps, and scope settings in your project management tools.