Job Description
Who we are
Fido empowers millions across Africa to take control of their finances with ease. As a leader in cutting-edge financial technology, Fido clears the way for building credit, securing instant loans, making smart investments, and obtaining tailored insurance. No banker’s hours, no hidden fees – just endless opportunities.
From city centers to rural communities, Fido is breaking barriers and creating financial freedom, providing access to innovative tools and services that foster growth and empowerment. By leveraging advanced technology, Fido is shaping a future of opportunity and financial inclusion across the continent.
Join the team and be a part of leading this transformative change, driving impact where it matters most.
What you will do
Build data pipelines that collect and transform data to support ML models, analysis, and reporting.
Work in a high-volume production environment, making data standardized and reusable, from architecture to production.
Work with off-the-shelf tools including DynamoDB, SQS, S3, Redshift, Snowflake, and MySQL, often pushing them past their limits.
Work with an international, multidisciplinary team of data engineers, data scientists, and data analysts.
Who you are
3+ years of experience in data engineering or software engineering in the big data domain.
3+ years of coding experience with Python or an equivalent language.
SQL expertise, with experience working with various databases (relational and NoSQL), data warehouses, external data sources, and AWS cloud services.
Experience in building and optimizing data pipelines, architectures, and data sets.
Familiarity with the data engineering tech stack – ETL tools, orchestration tools, microservices, Kubernetes (K8s), and AWS Lambda.
End-to-end experience – owning features from the idea stage through design, architecture, coding, integration, and deployment.
Experience working with cloud services such as AWS, Azure, or Google Cloud.
B.Sc. in Computer Science or an equivalent STEM field.