Senior Data Engineer
MUST HAVE SNOWFLAKE, AWS
Salary - £70-80k with 15% bonus
Hybrid working – a couple of days in the office
City of London
We are looking for:
- Good understanding of data engineering principles
- A strong technical grasp of Snowflake, including automating it and transforming complex datasets
- A solid AWS skillset
- Delivery experience
- Experience building and implementing data warehousing solutions using Snowflake and AWS
Key Responsibilities:
- Design and implement scalable, secure, and cost-efficient data solutions on AWS, leveraging services such as Glue, Lambda, S3, Redshift, and Step Functions.
- Lead the development of robust data pipelines and analytics platforms, ensuring high availability, performance, and maintainability.
- Demonstrate proficiency in software engineering principles, contributing to the development of reusable libraries, APIs, and infrastructure-as-code components that support the broader data and analytics ecosystem.
- Contribute to the evolution of the team’s data engineering standards and best practices, including documentation, testing, and architectural decisions.
- Develop and maintain data models and data marts that support self-service analytics and enterprise reporting.
- Drive automation and CI/CD practices for data workflows, ensuring reliable deployment and monitoring of data infrastructure.
- Ensure data quality, security, and compliance with internal policies and external regulations.
- Continuously optimize data processing workflows for performance and cost, using observability tools and performance metrics.
- Collaborate cross-functionally with DevOps, analytical engineers, data analysts, and business stakeholders to align data solutions with product and business goals.
- Mentor and support team members through code reviews, pair programming, and knowledge sharing, fostering a culture of continuous learning and engineering excellence.
Skills and Experience:
- Proven experience as a data engineer with strong hands-on programming skills and software engineering fundamentals, including building scalable solutions in cloud environments (AWS preferred)
- Extensive experience with AWS services, e.g. EC2, S3, RDS, DynamoDB, Redshift, Lambda, API Gateway
- Solid foundation in software engineering principles, including version control (Git), testing, CI/CD, modular design, and clean code practices. Experience developing reusable components and APIs is a strong plus.
- Advanced SQL skills for complex data queries and transformations
- Proficiency in at least one programming language, with Python strongly preferred for data processing, automation, and pipeline development
- AWS or Snowflake certifications are a plus
- Hands-on experience with AWS services such as Glue (Spark), Lambda, Step Functions, ECS, Redshift, and SageMaker.
- Enthusiasm for cross-functional work and adaptability beyond traditional data engineering.
- For example, building APIs, integrating with microservices, or contributing to backend systems, not just data pipelines or data modelling.
- Familiarity with tools such as GitHub Actions, Jenkins, AWS CDK, CloudFormation, and Terraform, with hands-on personal contribution on the candidate's projects rather than this work being handled by a separate DevOps team.