Pricing Data Scientist in London
Harnham is looking for a Pricing Data Scientist in London
Job description
A global marketing-data organisation is upgrading the engine that matches millions of survey invitations to the right respondents. Your task: treat the matching pipeline as a full-scale optimisation problem and raise both accuracy and yield.
Responsibilities
- Model optimisation - refactor and improve existing matching/segmentation models; design objective functions that balance cost, speed and data quality.
- Experimentation - set up offline metrics and online A/B tests; analyse uplift and iterate quickly.
- Production delivery - build scalable pipelines in AWS SageMaker (moving to Azure ML); containerise code and hook into CI/CD.
- Monitoring & tuning - track drift, response quality and spend; implement automated retraining triggers.
- Collaboration - work with Data Engineering, Product and Ops teams to translate business constraints into mathematical formulations.
Tech stack
Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow)
SQL (Redshift, Snowflake or similar)
AWS SageMaker → Azure ML migration, with Docker, Git, Terraform, Airflow / ADF
Optional extras: Spark, Databricks, Kubernetes.
What you'll bring
- 3-5+ years building optimisation or recommendation systems at scale.
- Strong grasp of mathematical optimisation (e.g., linear/integer programming, meta-heuristics) as well as ML.
- Hands-on cloud ML experience (AWS or Azure).
- Proven track record turning prototypes into reliable production services.
- Clear communication and documentation habits.
Desired Skills and Experience
3-5+ years of optimisation/recommender work at production scale (dynamic pricing, yield management, marketplace matching).
Mathematical optimisation know-how - LP/MIP, heuristics, constraint tuning, objective-function design.
Python toolbox: pandas, NumPy, scikit-learn, PyTorch/TensorFlow; clean, tested code.
Cloud ML: hands-on with AWS SageMaker plus exposure to Azure ML; Docker, Git, CI/CD, Terraform.
SQL mastery for heavy-duty data wrangling and feature engineering.
Experimentation chops - offline metrics, online A/B test design, uplift analysis.
Production mindset: containerise models, deploy via Airflow/ADF, monitor drift, automate retraining.
Soft skills: clear comms, concise docs, and a collaborative approach with DS, Eng & Product.
Bonus extras: Spark/Databricks, Kubernetes, big-data panel or ad-tech experience.
Extra information
- Status: Open
- Education Level: Secondary School
- Location: London
- Type of Contract: Part-time
- Published at: 09-07-2025
- Full UK/EU driving license preferred: No
- Car Preferred: No
- Must be eligible to work in the EU: No
- Cover Letter Required: No
- Languages: English