Job Description
The system ingests operational data, computes industrial KPIs, generates structured AI insights, and exposes deterministic APIs for a mobile application.
This role is strictly backend-focused. No frontend work is included.
Backend Architecture
The platform is built on:
- Django + Django REST Framework
- PostgreSQL with an ELT structure: raw → staging → analytics
- Celery + Redis for task orchestration
- Stripe at the billing boundary (already scoped separately)
- Docker-based deployment
Core Architectural Principles
- Multi-tenant isolation at organisation and site level
- Deterministic KPI recomputation
- Append-only raw data layer
- Strict schema validation for ingestion
- Versioned KPI logic
- AI outputs must be grounded in stored data
- No autonomous AI actions; outputs are advisory only
Backend Responsibilities (High-Level)
1. Data Ingestion Layer
- Build a robust CSV ingestion pipeline
- Implement header validation and schema enforcement
- Ensure idempotent file handling with no duplicate ingestion
- Transform raw data into the canonical ProductionFact model
- Maintain ingestion logs and validation reports
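The idempotency requirement can be sketched with a file-hash guard. The header columns and in-memory hash set below are illustrative stand-ins; the real pipeline would persist hashes in a table with a unique constraint and derive the schema from ProductionFact:

```python
import csv
import hashlib
import io

# Illustrative canonical header; the real schema comes from ProductionFact.
EXPECTED_HEADER = ["site_id", "workcenter_id", "sku_id",
                   "good_units", "scrap_units", "runtime_minutes"]

_seen_file_hashes = set()  # stand-in for a DB table with a unique constraint


def ingest_csv(raw_bytes):
    """Validate the header, reject duplicate files, and return parsed rows."""
    digest = hashlib.sha256(raw_bytes).hexdigest()
    if digest in _seen_file_hashes:
        raise ValueError("duplicate file: already ingested")
    reader = csv.reader(io.StringIO(raw_bytes.decode("utf-8")))
    header = next(reader)
    if header != EXPECTED_HEADER:
        raise ValueError("schema violation: unexpected header %r" % header)
    rows = [dict(zip(EXPECTED_HEADER, row)) for row in reader]
    _seen_file_hashes.add(digest)  # record only after successful validation
    return rows
```

Hashing the raw bytes (rather than the filename) is what makes re-uploads of the same file a no-op, which is the behaviour the "no duplicate ingestion" requirement asks for.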
2. Manufacturing Data Model Refinement
- Refactor the ProductionFact schema to support:
- Workcenter context
- SKU and job granularity
- Structured downtime categorisation
- Cost attribution fields
- Additionally:
- Implement canonical master data tables
- Enforce referential integrity
3. KPI Engine (Industrial-Grade)
- Correct OEE computation including availability, performance, and quality
- Implement structured downtime loss logic
- Build reliability metrics foundation using event-based design
- Ensure deterministic recompute capability
- Support time-series aggregation
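The standard three-factor OEE decomposition the engine must implement can be sketched as follows; the input names are assumptions about what ProductionFact will expose:

```python
def compute_oee(planned_minutes, runtime_minutes,
                ideal_cycle_minutes, total_units, good_units):
    """Deterministic OEE from the three classic factors.

    availability = runtime / planned production time
    performance  = (ideal cycle time x total units) / runtime
    quality      = good units / total units
    """
    availability = runtime_minutes / planned_minutes if planned_minutes else 0.0
    performance = ((ideal_cycle_minutes * total_units) / runtime_minutes
                   if runtime_minutes else 0.0)
    quality = good_units / total_units if total_units else 0.0
    return {
        "availability": availability,
        "performance": performance,
        "quality": quality,
        "oee": availability * performance * quality,
    }
```

Because the function is pure (same inputs, same outputs), recomputing a KPI for any historical window is deterministic by construction.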
4. Dashboard APIs
- Expose pre-computed KPI endpoints
- Implement cached read APIs
- Support filtering by site, shift, and workcenter
- Enforce entitlement gating
5. AI Insight Layer (Backend Only)
- Generate and store:
- AI Suggestions
- AI Improvements
- AI Insights
- Additionally:
- Ensure traceability to source data
- Cache AI outputs
- No frontend integration required
6. Task Orchestration
Implement Celery task chains:
validate → transform → ingest → compute KPIs → generate AI
- Also include:
- Scheduled ingestion support
- Idempotent task handling
Phase 3 – Manufacturing Intelligence Expansion
1. Job-Level Margin Foundation (Complete Implementation)
Data Model Expansion
Extend the schema with a dedicated JobPerformance model. Do not overload ProductionFact.
- The model must include:
- job_id indexed and tenant-scoped
- site_id
- workcenter_id
- sku_id
- quoted_revenue
- quoted_material_cost
- quoted_labour_cost
- quoted_overhead_cost
- actual_material_cost
- actual_labour_cost
- allocated_overhead_cost
- downtime_cost
- scrap_cost
- revenue_recognised
- job_status
- job_start_date
- job_end_date
All monetary fields must use Decimal with currency support.
Margin Calculations (Deterministic)
Implement:
- Actual Margin = revenue_recognised - (actual_material_cost + actual_labour_cost + allocated_overhead_cost + downtime_cost + scrap_cost)
- Quoted Margin = quoted_revenue - (quoted_material_cost + quoted_labour_cost + quoted_overhead_cost)
- Margin Variance (%) = (Actual Margin - Quoted Margin) / Quoted Margin × 100
- Margin Erosion Attribution must break down percentage erosion into:
- Scrap contribution
- Downtime contribution
- Labour overrun
- Material price variance
All formulas must be versioned and logged.
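A sketch of the three margin formulas using Decimal throughout, with an illustrative version tag of the kind the versioning requirement implies:

```python
from decimal import Decimal

FORMULA_VERSION = "margin-v1"  # illustrative version tag, logged with each result


def actual_margin(revenue_recognised, actual_material, actual_labour,
                  allocated_overhead, downtime_cost, scrap_cost):
    """Revenue recognised minus the sum of all actual cost components."""
    return revenue_recognised - (actual_material + actual_labour +
                                 allocated_overhead + downtime_cost + scrap_cost)


def quoted_margin(quoted_revenue, quoted_material, quoted_labour, quoted_overhead):
    """Quoted revenue minus the sum of quoted cost components."""
    return quoted_revenue - (quoted_material + quoted_labour + quoted_overhead)


def margin_variance_pct(actual, quoted):
    """(Actual - Quoted) / Quoted, expressed as a percentage."""
    return (actual - quoted) / quoted * Decimal("100")
```

Keeping the formulas as pure functions parameterised only by stored fields is what makes recomputation deterministic and auditable.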
Margin APIs
- Build:
- /api/margin/job/{job_id}
- /api/margin/site/{site_id}
- /api/margin/summary
- Responses must include:
- Margin values
- Variance percentage
- Erosion breakdown
- Financial impact
- Data lineage metadata
All results must be cacheable and recomputable.
2. Cost Attribution Logic (Production-Grade)
Deterministic Cost Model
Implement a cost engine with:
- Material cost per good unit = actual_material_cost / good_units
- Labour cost per runtime hour = actual_labour_cost / runtime_hours
- Overhead allocation must support configurable methods:
- Per shift
- Per runtime hour
- Per job
A configuration table must define the allocation rule per tenant.
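A sketch of the deterministic cost model; the `ALLOCATION_RULES` dict below stands in for the per-tenant configuration table:

```python
from decimal import Decimal

# Stand-in for the per-tenant configuration table defining the allocation rule.
ALLOCATION_RULES = {"tenant-a": "per_runtime_hour", "tenant-b": "per_job"}


def material_per_good_unit(actual_material_cost, good_units):
    return actual_material_cost / Decimal(good_units)


def labour_per_runtime_hour(actual_labour_cost, runtime_hours):
    return actual_labour_cost / runtime_hours


def allocate_overhead(tenant_id, total_overhead, runtime_hours,
                      job_count, shift_count):
    """Spread overhead using the tenant's configured method."""
    method = ALLOCATION_RULES.get(tenant_id, "per_job")
    if method == "per_runtime_hour":
        return total_overhead / runtime_hours
    if method == "per_shift":
        return total_overhead / Decimal(shift_count)
    return total_overhead / Decimal(job_count)
```

Resolving the method from configuration at call time means the same engine serves all tenants while each tenant's results stay reproducible from its stored rule.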
KPI Endpoints
- Build:
- /api/kpi/cost-per-unit
- /api/kpi/cost-variance
- /api/kpi/unit-economics
- All endpoints must support filtering by:
- site
- workcenter
- sku
- job
- time range
All responses must include formula version and input data range.
3. Cross-Site Normalised Benchmarking (Internal)
Normalisation Rules
- Standardise:
- OEE (time-weighted)
- Scrap percentage
- Cost per unit
- Ensure:
- Comparable time ranges
- Comparable shift hours
- Currency normalisation
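Currency normalisation might look like the following; the FX table and base-currency choice are placeholders, and for reproducibility the real rates would come from a dated, stored snapshot rather than a live feed:

```python
from decimal import Decimal

# Placeholder FX snapshot; production would store dated rates per snapshot.
FX_TO_BASE = {"USD": Decimal("1.00"), "EUR": Decimal("1.08"), "GBP": Decimal("1.27")}


def normalise_amount(amount, currency):
    """Convert a monetary value into the base currency for cross-site comparison."""
    return amount * FX_TO_BASE[currency]
```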
Percentile Logic
- For each KPI:
- Compute distribution across sites
- Assign percentile rank
- Flag top performer
- Flag bottom performer
- Flag above or below median
Store benchmarking snapshots for reproducibility.
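The percentile logic above can be sketched as below; the exact percentile convention (here, rank / (n − 1)) is a design choice that would need agreement:

```python
def percentile_ranks(site_values):
    """Rank sites for one KPI: percentile, top/bottom flags, median flag.

    site_values: {site_id: kpi_value}, higher value assumed better.
    """
    ordered = sorted(site_values.items(), key=lambda kv: kv[1])
    n = len(ordered)
    values = [v for _, v in ordered]
    median = (values[n // 2] if n % 2
              else (values[n // 2 - 1] + values[n // 2]) / 2)
    out = {}
    for idx, (site, value) in enumerate(ordered):
        out[site] = {
            "value": value,
            "percentile": 100 * idx / (n - 1) if n > 1 else 100,
            "top_performer": idx == n - 1,
            "bottom_performer": idx == 0,
            "above_median": value > median,
        }
    return out
```

Persisting the full output dict as a benchmarking snapshot gives the reproducibility the brief asks for, since later data changes cannot silently alter a published ranking.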
Benchmark APIs
- Build:
- /api/benchmark/kpi/{kpi_name}
- /api/benchmark/site/{site_id}
- Responses must return:
- Rank
- Percentile
- Group average
- Variance from average
- Financial delta if the site matched the top quartile
4. Economic Impact Layer (Mandatory)
- Every KPI endpoint must be able to include, on request:
- Economic impact value
- Impact calculation logic
- Time range used
Examples:
- Scrap impact = scrap_units × material_cost_per_unit
- Downtime impact = downtime_minutes × cost_per_minute
- OEE delta impact = lost_throughput × contribution_margin
Impact values must be stored in the analytics layer for audit.
Add an economic_impact object in API responses.
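A sketch of a uniform `economic_impact` object covering the example formulas above; the field names and version tag are assumptions about the eventual response shape:

```python
from decimal import Decimal

IMPACT_VERSION = "impact-v1"  # illustrative logic-version tag


def economic_impact(kind, quantity, unit_cost, time_range):
    """Build the economic_impact object attached to KPI responses.

    kind: "scrap" (units x material cost/unit), "downtime"
    (minutes x cost/minute), or "oee_delta" (lost throughput x
    contribution margin). All three reduce to quantity x unit cost.
    """
    return {
        "kind": kind,
        "value": quantity * unit_cost,
        "logic_version": IMPACT_VERSION,
        "time_range": time_range,
    }
```

Because every impact carries its logic version and time range, the stored analytics-layer copy stays auditable even after formulas are revised.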
5. AI Grounding and Traceability (Production-Ready)
- Every AI output must store:
- ai_output_id
- organisation_id
- related_kpi_id
- source_table_names
- source_record_ids
- time_range
- kpi_version
- prompt_snapshot
- structured_input_data_snapshot
- model_name
- generation_timestamp
No AI output may exist without lineage.
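The mandatory-lineage rule can be enforced at the storage boundary. This dataclass mirrors the field list above; in Django these would be model columns, and the storage function is a simplified stand-in:

```python
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass(frozen=True)
class AIOutputLineage:
    """One-to-one with the mandatory lineage fields listed above."""
    ai_output_id: str
    organisation_id: str
    related_kpi_id: str
    source_table_names: tuple
    source_record_ids: tuple
    time_range: tuple            # (start_iso, end_iso)
    kpi_version: str
    prompt_snapshot: str
    structured_input_data_snapshot: dict
    model_name: str
    generation_timestamp: str


def store_ai_output(text, lineage: Optional[AIOutputLineage]):
    """Reject any AI output that arrives without lineage."""
    if lineage is None:
        raise ValueError("AI output rejected: lineage is mandatory")
    return {"text": text, "lineage": asdict(lineage)}
```

Making the lineage argument non-optional in practice (the `None` branch always raises) is the simplest way to guarantee "no AI output may exist without lineage" at write time rather than by later audit.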
Audit Endpoint
- Build:
- /api/ai/audit/{ai_output_id}
- Return:
- Full citation trail
- KPI inputs used
- Raw data reference
- Formula version
- Economic impact linkage
This ensures defensibility under regulatory scrutiny.
6. Industrial Readiness and Maturity Scoring
- Implement a scoring engine with inputs:
- Percentage data completeness
- KPI coverage ratio
- Margin model activation
- Benchmarking availability
- Historical depth of data
- Output:
- 0 to 100 maturity score
- Tier classification: Foundational, Structured, Optimised
- Expose:
- /api/readiness/organisation
Score must be recomputable and transparent.
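A transparent weighted-sum sketch of the scoring engine; the weights and tier thresholds below are illustrative and would need sign-off:

```python
# Illustrative weighting over the five inputs, each normalised to 0-1.
WEIGHTS = {
    "data_completeness_pct": 0.30,
    "kpi_coverage_ratio": 0.25,
    "margin_model_active": 0.15,
    "benchmarking_available": 0.15,
    "historical_depth_ratio": 0.15,
}


def maturity_score(inputs):
    """0-100 score as a weighted sum; returns inputs and weights for transparency."""
    score = sum(WEIGHTS[k] * float(inputs[k]) for k in WEIGHTS) * 100
    if score < 40:
        tier = "Foundational"
    elif score < 75:
        tier = "Structured"
    else:
        tier = "Optimised"
    return {"score": round(score, 1), "tier": tier,
            "inputs": inputs, "weights": WEIGHTS}
```

Returning the inputs and weights alongside the score is what makes the result recomputable and transparent: the response itself documents how the number was produced.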
Phase 3 Outcome
- After completion, Exec App will provide:
- True job-level economic diagnostics
- Deterministic cost engine
- Internal benchmarking
- Financial impact visibility
- Audit-ready AI outputs
- Organisational maturity scoring
Documentation and Validation
- Postman collection
- API documentation
- Proof of idempotency
- Migration discipline with no schema corruption
- Clean README with setup steps
What Is Not Included
- React Native frontend
- Mobile UI
- Website or marketing pages
- App store deployment
- DevOps infrastructure build-out (Docker assumed)
Required Experience
- Django + DRF at production level
- PostgreSQL schema design
- Celery + Redis
- Multi-tenant SaaS backend architecture
- Clean migration management
- API design discipline
Timeline and Budget
Timeline: 4 to 6 weeks preferred, milestone-based delivery.
Total Budget: $300, non-negotiable. More work to follow.