🔍 We are looking for a talented Data Engineer.
- Design and develop scalable, efficient data pipelines that perform extraction, filtering, cleansing and transformation of data from various sources (flat files, databases, external APIs).
- Extend, monitor and maintain existing data pipelines, implementing new features, bug fixes, performance optimizations and more.
- Design and implement new data platform models (data lake or data warehouse).
- Improve, monitor and maintain existing data models, performing and automating data quality checks and performance tuning.
- Collaborate with stakeholders on requirements evaluation, research of data sources’ documentation and data exploration.
- Advocate and adhere to the highest industry standards and practices for coding, data modelling and security.
- Deploy and maintain cloud infrastructure assets and services to perform the data engineering operations.
- Build CI/CD processes and automate testing and other repetitive tasks.
- Run ad hoc queries, scripts and other activities based on emerging business or technical needs.
- Contribute to Amplify Analytix culture of mutual help, support and knowledge sharing.
💯 Skills & Experience:
- A degree in Mathematics, Physics, Engineering, Computer Science or another quantitative subject.
- 3+ years hands-on experience in complex scalable ETL design, implementation and maintenance, workflow management, performance tuning, automated monitoring and testing.
- 3+ years hands-on experience with data modeling in relational databases or data lakes.
- 1+ years of experience with data solutions in the cloud (AWS or Azure) using Databricks, Azure Data Factory, AWS Glue, Amazon Redshift, Azure Synapse Analytics or similar.
- Track record of projects in an international environment.
- Hands-on experience extracting data from RESTful APIs, streaming sources, databases and file stores.
- Extensive knowledge of SQL, including user-defined functions, stored procedures, query optimization, performance tuning and indexing.
- Experience writing code in an object-oriented language (ideally Python) and a good understanding of OOP principles.
- Experience with command line and UNIX / Linux environments.
- Good understanding of data-processing security practices.
- Knowledge and experience with version control (ideally Git), git-flow or similar code management approaches.
- Fluent English is a must.
📎 The following will be considered an advantage:
- Experience with Docker.
- Knowledge of and experience with libraries such as pandas, NumPy and PySpark.
- Experience with cloud-based security services (Azure Key Vault, AWS KMS, AWS Secrets Manager, etc.).
- Experience with the Hadoop ecosystem.
- Experience with reporting tools (Power BI, Tableau, etc.).
- Consulting experience.
🤝 What we offer you
- Attractive compensation and benefits package.
- A flexible working model combining a hybrid work policy (work from home with office collaboration days) and flexible working hours.
- Medical insurance with a dental package.
- Monthly access to mental wellness consultation.
- Multiple career progression opportunities.
- Tailor-made training and an ongoing development programme to help you enhance your skills, with access to certification courses (a variety of technical, soft-skills and business training programmes).
- Fun and collaborative working atmosphere with a vast variety of company activities.
- International cross-cultural team environment.
- Diverse cross-functional projects.
- Work side by side with top-notch professionals.
- CSR activities.
- Employee Referral and Recognition programmes.
- Team and family events.
🌏🏳️🌈 At Amplify Analytix we stand for equal opportunities; we respect and foster diversity. All applications will be treated in strict confidentiality and assessed entirely on qualifications, skills, knowledge, experience and relevant business requirements.