About the Role
· Design, develop, and implement high-performance data pipelines for ingesting, processing, and transforming diverse data sources.
· Architect and maintain scalable data warehouse and lake solutions, ensuring data accessibility and integrity.
· Automate data workflows through scripting and orchestration tools (e.g., Airflow, Luigi, SSIS, Alteryx).
· Implement data quality checks and monitoring systems to guarantee data accuracy and consistency.
· Collaborate with data analysts and scientists to understand data requirements and optimize data pipelines.
· Stay ahead of the curve by exploring and implementing new data technologies and best practices.
· Mentor and guide junior data engineers, fostering a culture of continuous learning and excellence.
Requirements
· 3+ years of experience as a data engineer or in a related role.
· Expert knowledge of data warehouse and data lake technologies (e.g., Spark, Hadoop, Snowflake).
· Proficiency in programming languages and scripting tools (e.g., Python, Java, Scala, SQL).
· Strong understanding of data modeling and ETL/ELT processes.
· Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
· Excellent communication and problem-solving skills.
· Ability to work independently and as part of a team.
Beyond these technical skills, we seek a candidate who embodies:
· A passion for data and its transformative power.
· A meticulous approach to detail and a commitment to quality.
· A proactive mindset and a desire to continuously learn and adapt.
· The ability to navigate complex technical challenges with resourcefulness and ingenuity.
About the Company
A leader in go-to-market (GTM) solutions.