Platform Analytics
Pandora provides comprehensive analytics for platform administrators and organization admins to monitor marketplace health, worker performance, and financial outcomes. This page covers the key metrics available and how to use them effectively.
Worker Tier Distribution
Understanding how workers are distributed across tiers helps assess marketplace health:
| Metric | What It Shows | Healthy Range |
|---|---|---|
| Tier 1 (New) count | Workers still in onboarding/early phase | 20-40% of total |
| Tier 2 (Proven) count | Workers with basic reliability established | 20-30% |
| Tier 3 (Reliable) count | Solid mid-tier workers | 15-25% |
| Tier 4 (Expert) count | High-performing workers | 10-15% |
| Tier 5 (Elite) count | Top performers | 3-8% |
A healthy distribution looks like a pyramid — many workers at the bottom, fewer at the top. If too many workers cluster at Tier 1 without advancing, consider adjusting the minimum job requirements or scoring weights.
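As a rough sketch of this health check, the healthy bands from the table can be encoded and compared against the live distribution. The thresholds and function names below are illustrative, not part of the platform's actual API:

```python
from collections import Counter

# Healthy share of the worker base per tier (mirrors the table above;
# the exact thresholds are illustrative assumptions).
HEALTHY_RANGES = {
    1: (0.20, 0.40),
    2: (0.20, 0.30),
    3: (0.15, 0.25),
    4: (0.10, 0.15),
    5: (0.03, 0.08),
}

def tier_distribution(worker_tiers):
    """Return each tier's share of the worker base."""
    counts = Counter(worker_tiers)
    total = len(worker_tiers)
    return {tier: counts.get(tier, 0) / total for tier in HEALTHY_RANGES}

def out_of_range(shares):
    """List tiers whose share falls outside the healthy band."""
    return [
        tier for tier, (lo, hi) in HEALTHY_RANGES.items()
        if not lo <= shares[tier] <= hi
    ]

# 100 workers clustered at Tier 1: only Tier 1 breaches its band here.
tiers = [1] * 45 + [2] * 25 + [3] * 17 + [4] * 10 + [5] * 3
print(out_of_range(tier_distribution(tiers)))  # [1]
```

Running this weekly and alerting on any non-empty result is one simple way to catch clustering before it becomes entrenched.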
Job Posting Metrics
Track how the marketplace is being used:
| Metric | Description |
|---|---|
| Total jobs posted | Cumulative Pandora job postings |
| Active postings | Currently open and accepting claims |
| Average time to first claim | How quickly posted jobs attract workers |
| Average time to fill all slots | How long until all slots are claimed |
| Fill rate | Percentage of postings that get fully claimed |
| Cancellation rate | Percentage of postings cancelled before any claim |
| Escalation rate | Percentage of internal-first postings that escalate to external |
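The three rate metrics in the table can be derived from raw posting records along these lines. The dictionary field names (`fully_claimed`, `cancelled_before_claim`, `internal_first`, `escalated`) are assumptions for illustration, not the actual schema:

```python
def posting_rates(postings):
    """Compute fill, cancellation, and escalation rates from posting records.

    Each posting is a dict of boolean flags; the field names here are
    illustrative assumptions."""
    total = len(postings)
    filled = sum(p["fully_claimed"] for p in postings)
    cancelled = sum(p["cancelled_before_claim"] for p in postings)
    # Escalation rate is scoped to internal-first postings only.
    internal_first = [p for p in postings if p["internal_first"]]
    escalated = sum(p["escalated"] for p in internal_first)
    return {
        "fill_rate": filled / total,
        "cancellation_rate": cancelled / total,
        "escalation_rate": escalated / len(internal_first) if internal_first else 0.0,
    }

postings = [
    {"fully_claimed": True,  "cancelled_before_claim": False, "internal_first": True,  "escalated": False},
    {"fully_claimed": True,  "cancelled_before_claim": False, "internal_first": True,  "escalated": True},
    {"fully_claimed": False, "cancelled_before_claim": True,  "internal_first": False, "escalated": False},
    {"fully_claimed": False, "cancelled_before_claim": False, "internal_first": False, "escalated": False},
]
print(posting_rates(postings))
# {'fill_rate': 0.5, 'cancellation_rate': 0.25, 'escalation_rate': 0.5}
```

Note that the escalation rate uses internal-first postings as its denominator, matching the table's definition.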
Time to Claim Analysis
Time to first claim is one of the most important marketplace health indicators:
| Average Time | Assessment | Action |
|---|---|---|
| Under 1 hour | Excellent — high worker demand | Consider expanding job volume |
| 1-4 hours | Good — healthy marketplace | Monitor, no action needed |
| 4-24 hours | Fair — adequate but room for improvement | Review payout values, worker base |
| Over 24 hours | Concerning — low worker engagement | Investigate: too few workers? Payouts too low? |
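A minimal sketch of mapping an observed average onto these bands might look like the following (the band boundaries in the table overlap at 1, 4, and 24 hours; here the upper bound is treated as inclusive, which is an assumption):

```python
def assess_time_to_claim(avg_hours):
    """Map average time to first claim onto the assessment bands above."""
    if avg_hours < 1:
        return "excellent"
    if avg_hours <= 4:
        return "good"
    if avg_hours <= 24:
        return "fair"
    return "concerning"

print(assess_time_to_claim(0.5))  # excellent
print(assess_time_to_claim(30))   # concerning
```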
Worker Performance Metrics
Aggregate performance data across the worker base:
| Metric | Description | Target |
|---|---|---|
| Average completion rate | Completed jobs / accepted jobs across all workers | Above 90% |
| Average on-time rate | On-time completions / total completions | Above 85% |
| No-show rate | No-shows / total accepted jobs | Below 3% |
| Put-back rate | Put-backs / total accepted jobs | Below 10% |
| Average satisfaction score | Mean org rating across all completions | Above 4.0/5.0 |
| Average response time | Median time from job visible to claim | Under 2 hours |
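These aggregates can be rolled up from per-worker counters roughly as follows. The per-worker field names are illustrative assumptions; note the differing denominators (accepted jobs vs. completions) and that response time uses a median, per the table:

```python
from statistics import median

def performance_snapshot(workers):
    """Aggregate marketplace-wide performance rates.

    Each worker dict's field names are illustrative assumptions."""
    accepted = sum(w["accepted"] for w in workers)
    completed = sum(w["completed"] for w in workers)
    on_time = sum(w["on_time"] for w in workers)
    no_shows = sum(w["no_shows"] for w in workers)
    put_backs = sum(w["put_backs"] for w in workers)
    # Pool every claim's response time and take the overall median.
    response_hours = [t for w in workers for t in w["response_hours"]]
    return {
        "completion_rate": completed / accepted,
        "on_time_rate": on_time / completed,
        "no_show_rate": no_shows / accepted,
        "put_back_rate": put_backs / accepted,
        "median_response_hours": median(response_hours),
    }

workers = [
    {"accepted": 10, "completed": 9, "on_time": 8, "no_shows": 0,
     "put_backs": 1, "response_hours": [0.5, 1.0, 2.0]},
    {"accepted": 10, "completed": 9, "on_time": 8, "no_shows": 1,
     "put_backs": 0, "response_hours": [1.5, 3.0]},
]
snap = performance_snapshot(workers)
print(snap["completion_rate"])  # 0.9
```

Comparing each returned rate against the targets in the table gives a quick pass/fail health view.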
Score Event Analysis
The PandoraScoreEvent table provides granular data on score changes:
| Event Type | What It Tracks |
|---|---|
| job_complete | Score change from on-time completion |
| job_late | Score change from late completion |
| no_show | Score change from no-show (always negative) |
| put_back_on_time | Score change from an on-time put-back |
| put_back_late | Score change from a late put-back |
| rating_received | Score change from a new org rating |
| manual_override | Admin-initiated tier change |
| scheduled_recalc | Daily scheduled recalculation |
Analyzing the distribution of event types helps identify systemic issues. A spike in no-shows, for example, might indicate scheduling problems or payout dissatisfaction.
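One way to operationalize this is to compare each event type's share of the current period against a baseline period and flag anything that has spiked. This is a sketch under assumed field names, not the platform's actual analysis tooling:

```python
from collections import Counter

def event_shares(events):
    """Share of each event type among all score events in a period."""
    counts = Counter(e["event_type"] for e in events)
    total = len(events)
    return {etype: n / total for etype, n in counts.items()}

def spiking_events(current, baseline, factor=2.0):
    """Event types whose share grew more than `factor`-fold vs the baseline."""
    return sorted(
        etype for etype, share in current.items()
        if baseline.get(etype, 0.0) > 0 and share > factor * baseline[etype]
    )

baseline = {"job_complete": 0.90, "no_show": 0.05, "rating_received": 0.05}
current = event_shares(
    [{"event_type": "no_show"}] * 3 + [{"event_type": "job_complete"}] * 17
)
print(spiking_events(current, baseline))  # ['no_show']
```

A flagged `no_show` spike would then feed the investigation steps in the issue table later on this page.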
Financial Metrics
For Organizations
| Metric | Description |
|---|---|
| Total Pandora spend | Sum of all worker payouts |
| Average payout per job | Mean worker payout across completed jobs |
| Blended payout rate | Average tier percentage across all claims |
| Tier savings | Difference between Pandora value and actual payouts (savings from sub-100% tiers) |
| Margin per job | Job value minus worker payout |
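The organization-level rollup can be sketched from per-claim records as below. The field names (`pandora_value` for the posting value, `job_value` for the client payment, `tier_pct` for the worker's tier payout percentage, `payout` for the amount actually paid) are assumptions for illustration:

```python
def org_financials(claims):
    """Roll up the per-organization financial metrics from the table above.

    Each claim dict's field names are illustrative assumptions."""
    n = len(claims)
    total_spend = sum(c["payout"] for c in claims)
    return {
        "total_spend": total_spend,
        "avg_payout_per_job": total_spend / n,
        # Simple mean of tier percentages across claims.
        "blended_payout_rate": sum(c["tier_pct"] for c in claims) / n,
        # Savings from sub-100% tiers: Pandora value minus actual payout.
        "tier_savings": sum(c["pandora_value"] - c["payout"] for c in claims),
        "margin_per_job": sum(c["job_value"] - c["payout"] for c in claims) / n,
    }

claims = [
    {"pandora_value": 100.0, "job_value": 150.0, "tier_pct": 0.80, "payout": 80.0},
    {"pandora_value": 200.0, "job_value": 280.0, "tier_pct": 0.90, "payout": 180.0},
]
print(org_financials(claims))
```

Whether the blended rate should be a simple mean or weighted by job value is a design choice; a value-weighted mean better reflects where the money actually went.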
For the Platform
| Metric | Description |
|---|---|
| Total marketplace volume | Sum of all Pandora values across postings |
| Gross merchandise value | Sum of all job values (client payments) |
| Worker earnings | Total payouts to workers |
| Active worker count | Workers who completed at least one job in the last 30 days |
| Revenue per active worker | Average earnings per active worker |
Visibility Pipeline Analytics
Track how the visibility pipeline affects job distribution:
| Metric | Description |
|---|---|
| Jobs filtered by tier | How many jobs each tier level cannot see |
| Jobs filtered by lead time | Jobs hidden due to lead time restrictions |
| Jobs filtered by value cap | Jobs hidden due to value caps |
| Jobs filtered by supply tags | Jobs hidden due to supply mismatches |
| Average visible jobs per worker | How many jobs each worker typically sees |
This helps identify whether the visibility configuration is too restrictive (workers see too few jobs) or too loose (workers are overwhelmed with irrelevant listings).
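A filter-outcome log with one row per (worker, job) evaluation supports both views at once: a tally of filter reasons and the average number of visible jobs per worker. The record shape and reason labels (`tier`, `lead_time`, and so on) are illustrative assumptions:

```python
from collections import Counter, defaultdict

def visibility_funnel(records):
    """Tally filter reasons and compute average visible jobs per worker.

    Each record is one (worker, job) evaluation; a reason of None means
    the job was visible. Field and reason names are illustrative."""
    reasons = Counter(r["reason"] for r in records if r["reason"] is not None)
    visible = defaultdict(int)
    for r in records:
        if r["reason"] is None:
            visible[r["worker_id"]] += 1
    workers = {r["worker_id"] for r in records}
    avg_visible = sum(visible.values()) / len(workers) if workers else 0.0
    return dict(reasons), avg_visible

records = [
    {"worker_id": "w1", "reason": None},
    {"worker_id": "w1", "reason": "tier"},
    {"worker_id": "w2", "reason": None},
    {"worker_id": "w2", "reason": None},
    {"worker_id": "w2", "reason": "lead_time"},
]
print(visibility_funnel(records))
# ({'tier': 1, 'lead_time': 1}, 1.5)
```

A very low average alongside a large tier-filter tally points at over-restrictive tier gating rather than, say, supply-tag mismatches.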
Monitoring Dashboards
Real-Time Dashboard
A live view showing:
- Active job postings count
- Pending claims in progress
- Recent completions (last 24 hours)
- Active worker count (currently browsing or on jobs)
- Escalation timer countdowns for internal-first postings
Weekly Summary
An automated report including:
- New workers onboarded
- Tier promotions and demotions
- Total jobs completed
- Total payouts issued
- Top performing workers (by score delta)
- Flagged issues (high no-show rate, escalation failures)
Using Analytics Effectively
Identifying Issues
| Signal | Possible Issue | Investigation |
|---|---|---|
| Dropping fill rate | Payouts too low or worker shortage | Compare payout values to market rates; check active worker count |
| Rising no-show rate | Workers overcommitting | Check average claims per worker; consider adding schedule conflict warnings |
| Slow time to claim | Worker base not large enough or jobs not attractive | Review tier distribution and lead time configuration |
| Clustering at Tier 1 | Advancement too difficult | Review minimum job counts and scoring weights |
| Many escalations | Internal workers not claiming | Check if internal workers are active; consider adjusting escalation timing |
Making Data-Driven Adjustments
- Review analytics weekly — Look for trends, not single data points
- Compare before and after — When you change tier configuration or scoring weights, compare metrics from the week before and after
- Segment by tier — Performance metrics are most useful when broken down by tier level
- Monitor seasonality — Job volume and worker availability may vary by season
- Track cohorts — Follow groups of workers who joined at the same time to understand advancement patterns