Job Description
This is a fully remote position, open to candidates based in Indiana (USA).
We are looking for an experienced Senior Data Engineer to join a global pharmaceutical client on a long-term project. The role focuses on designing, building, and optimizing scalable, reliable data pipelines and data platforms that support advanced analytics, experimentation, and data-driven decision-making across manufacturing and business domains.
This is a senior-level position requiring deep technical expertise, strong problem-solving skills, and the ability to collaborate effectively with cross-functional teams in a regulated environment.
Key Responsibilities
- Design, build, and maintain scalable and reliable data pipelines using modern data engineering best practices
- Develop and optimize ETL/ELT pipelines for structured and semi-structured data
- Work extensively with SQL and Python or Scala to process and transform large datasets
- Build and manage data solutions on AWS cloud infrastructure
- Leverage big data technologies such as Spark, Kafka, and Hadoop
- Design and maintain data models and data warehousing solutions
- Orchestrate data workflows using tools such as Apache Airflow
- Ensure data integrity, quality, and accuracy through validation, cleaning, and preprocessing techniques
- Optimize data collection, processing, and storage for performance, scalability, and cost efficiency
- Design and conduct experiments to test hypotheses and validate data-driven solutions
- Collaborate with analytics, data science, engineering, and business stakeholders to understand data needs and align solutions with organizational goals
- Create templates, dashboards, and visualizations to communicate insights clearly to stakeholders
- Apply and support data governance, security, and compliance standards, especially in a regulated pharmaceutical environment
Requirements
- 10+ years of experience in Data Engineering or similar roles
- Strong expertise in SQL and Python or Scala
- Hands-on experience with AWS (candidates with strong AWS exposure will be prioritized)
- Solid experience with Spark, Kafka, and Hadoop
- Proven experience designing and implementing ETL/ELT pipelines
- Strong understanding of data modeling, data warehousing concepts, and analytics use cases
- Experience with workflow orchestration tools such as Apache Airflow
- Strong analytical thinking and problem-solving skills
- Excellent communication and collaboration skills
- Ability to work independently in a remote, distributed team
Nice to Have
- Experience in the pharmaceutical or life sciences industry (top priority)
- Knowledge of manufacturing processes and manufacturing data
- Experience working in regulated environments (GxP, compliance-driven projects)
Benefits
- Project Duration: 10+ months
- Work Model: Remote (US time zone collaboration required)
- Location: Open to candidates based in US or Europe
- Contract Type: B2B / Contract
This offer from "futureproof consulting" has been enriched by Jobgether.com and got a 74% flex score.
Apply to this job