Lead Data Engineer, Python, Java, AWS

🌍 Remote, USA 🎯 Full-time 🕐 Posted Recently

Job Description

    Responsibilities:
  • Collaborate with data scientists to ensure data is accurately extracted, transformed, and loaded for analysis and decision-making
  • Collaborate and partner effectively with Scores stakeholders to deliver data-driven solutions that support strategic Scores initiatives
  • Analyze, interpret, and manipulate large data sets to support analytic research and model development efforts
  • Deliver high-quality results for business-critical projects within expected timelines
  • Use internal technologies to develop, maintain, and improve tools and processes that solve challenging business problems in predictive analytics
  • Support our existing code base and the overall analytic SDLC
  • Demonstrate self-initiative and innovation by writing new code to continuously evaluate and improve the existing code base
  • Apply advanced data transformation techniques to optimize the processing of large datasets
  • Work closely with data scientists and other data engineers to define the best methodologies for generating new tools, code, and datasets based on project requirements
    Requirements:
  • BS degree in Computer Science, Engineering, Information Technology, Management Information Systems (or equivalent work experience)
  • Prior/current experience working with U.S. Credit Bureau data (Preferred)
  • Proven programming skills in Python (Highly Preferred), Java/Groovy, Perl, and/or shell scripting
  • Demonstrated expertise with Linux (RedHat) and Windows operating systems
  • Expertise in AWS services (SageMaker, Jupyter Notebooks, S3, Athena) and Python in the AWS environment (AWS Certifications are a plus)
  • Proven expertise analyzing large datasets and applying data-cleaning techniques, with strong data wrangling skills
  • Strong team player with great communication skills
  • High proficiency with VS Code and notebook interfaces such as Jupyter
  • Adept at translating business problems into executable code
  • Ability to use statistical software and data manipulation tools for purposes of data quality evaluation
  • Highly detail-oriented, with the ability to execute a given process with meticulous precision
  • Proficient in auditing results with a critical level of accuracy and detail
  • Experience working with big data technologies (Spark, Hadoop, etc.)
  • Familiarity with relational databases (Oracle, MySQL, etc.)
  • Familiarity with Eclipse IDE
  • Familiar with version control tools such as GitHub or Bitbucket
    Benefits:
  • Highly competitive compensation
  • Benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so
  • An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie

Ready to Apply?

Don't miss out on this amazing opportunity!

🚀 Apply Now
