Job Description
Data Engineer (3–4 Years Experience)

Location: United States (Remote / On-site based on client needs)
Employment Type: Full-time (Contract or Contract-to-Hire)
Experience Level: Mid-level (3–4 years)
Company: Aaratech Inc
Eligibility: Open to U.S. Citizens and Green Card holders only. We do not offer visa sponsorship.

About Aaratech Inc
Aaratech Inc is a specialized IT consulting and staffing company that places elite engineering talent into high-impact roles at leading U.S. organizations. We focus on modern technologies across the cloud, data, and software disciplines. Our client engagements offer long-term stability, competitive compensation, and the opportunity to work on cutting-edge data projects.

Position Overview
We are seeking a Data Engineer with 3–4 years of experience for a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.

Key Responsibilities
• Design and develop scalable data pipelines to support batch and real-time processing (see the illustrative sketch at the end of this posting)
• Implement efficient Extract, Transform, Load (ETL) processes using tools like Apache Spark and dbt
• Develop and optimize SQL queries for data analysis and warehousing
• Build and maintain data warehousing solutions on platforms such as Snowflake or BigQuery
• Collaborate with business and technical teams to gather requirements and create accurate data models
• Write reusable, maintainable Python code for data ingestion, processing, and automation
• Ensure end-to-end data processing integrity, scalability, and performance
• Follow best practices for data governance, security, and compliance

Required Skills & Experience
• 3–4 years of experience in data engineering or a similar role
• Strong proficiency in SQL and Python
• Experience with Extract, Transform, Load (ETL) frameworks and building data pipelines
• Solid understanding of data warehousing concepts and architecture
• Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
• Proven experience in data modeling and schema design
• Exposure to data processing frameworks and performance optimization techniques
• Familiarity with cloud platforms such as AWS, GCP, or Azure

Nice to Have
• Experience with streaming data pipelines (e.g., Kafka, Kinesis)
• Exposure to CI/CD practices in data development
• Prior work in consulting or multi-client environments
• Understanding of data quality frameworks and monitoring strategies
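
Example of the Work
For candidates who want a feel for the day-to-day pipeline work described under Key Responsibilities, here is a minimal sketch of a batch ETL job in PySpark. It is illustrative only: the paths, table, and column names (orders, order_id, order_ts, quantity, unit_price) are hypothetical, and actual client stacks and targets will vary.

# Illustrative batch ETL sketch in PySpark; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw JSON events from a (hypothetical) landing zone
raw = spark.read.json("s3://example-landing/orders/2024-01-01/")

# Transform: drop malformed rows, normalize types, derive reporting columns
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("total_usd", F.col("quantity") * F.col("unit_price"))
)

# Load: write date-partitioned Parquet for the warehouse to ingest
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-warehouse/staging/orders/"))

spark.stop()

In practice, a job like this would typically be orchestrated on a schedule, with the load step targeting Snowflake or BigQuery rather than a staging bucket.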