Job Description
Title: Staff Data Operations (DataOps) Engineer
Location: 100% Remote
Time Zone: CST
Hours: Approximately 50 hours/week
Type: Direct hire
Salary range: $180K – $220K
- Job Description: We're seeking a visionary, senior-level Staff DataOps Engineer to lead this transformation, pioneering DataOps excellence at global scale and driving the adoption of next-generation microservices architectures with a relentless focus on automation. We are committed to implementing and evolving leading DataOps practices to ensure the highest levels of data reliability and efficiency. Our platform squads are at the forefront of this evolution, creating self-service products that democratize infrastructure, enabling rapid deployment, scalability, and reliability through sophisticated automation, continuous integration and continuous delivery (CI/CD), and data quality practices. We thrive on tackling complex challenges, leveraging creativity, decisiveness, and adaptability to navigate ambiguity and deliver transformative solutions through automated workflows and intelligent systems. If you're passionate about shaping the future of data and building platforms that empower entire organizations through automation, join us at Lore and be a catalyst for innovation.
- The ideal candidate can strategically lead the teams and architecture of multiple projects at once, not just deliver single-project work.
- Requirements:
- 8+ years of progressive experience in DataOps engineering, with a proven track record of implementing and optimizing data pipelines and automation frameworks.
- Deep expertise in data warehousing, data lakes, and distributed data processing technologies (e.g., Spark, Hadoop, Kafka).
- Exceptional proficiency in programming languages (e.g., SQL, Python, Java, Scala).
- Native GCP stack strongly preferred, as the existing technology stack will not change; comparable AWS or Azure experience may be considered.
- Expert-level understanding of microservices architecture and containerization technologies (e.g., Docker, Kubernetes).
- Deep understanding and experience with Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Proven ability to architect and implement highly scalable and resilient data solutions.
- Advanced understanding of data orchestration, monitoring, and observability tools.
- Proven ability to create and implement data automation frameworks.
- Experience with real-time data platforms at extreme scale.
- Strong understanding and experience with CI/CD methodologies and tools for data pipelines, including automated testing and deployment.
- Experience with data quality tools and monitoring systems.
- Experience with DataOps-related tooling.