Data Engineer
Amadora
Job description
The service to be provided will be part of the Data Integration team, which focuses on integrating applications across different business areas of our client, specifically Logistics, Communications, and Sales. The
data streaming solutions are based on an event-driven architecture, leveraging tools available in the
technological landscape such as Apache Kafka/Confluent, SnapLogic and Flink.
The Data Engineer develops and implements messaging/event-streaming data pipelines to process, transform
and analyze data in real time using technologies such as Apache Kafka/Confluent, Flink, SnapLogic and Kong
Connect. They ensure data quality and consistency by identifying and resolving data integration issues and by
continuously monitoring and improving integration processes, with the objective of increasing efficiency and
effectiveness. Additionally, they are required to create and maintain documentation for the implemented solutions.
Requirements
- Databases (relational and non-relational), data structures (JSON, XML) and SQL
- Cloud APIs to automate provisioning, deployment, system performance improvement and infrastructure
scaling (e.g. Terraform, Kubernetes)
- Programming skills and implementation experience (e.g. Python)
- Continuous Integration and Continuous Delivery (CI/CD) based application development, including source
control systems (GitLab)
- Design and implementation of integration solutions, including batch and real-time data,
messaging/event processing, cloud and process integrations
- APIs/connectors such as Kong and Confluent connectors
- AWS services (e.g. S3, Elastic Compute Cloud)
- Junior level: NoSQL technologies (MongoDB, CouchDB, Cassandra, Neo4j)
Want to apply?
Position
Name*
Email*
Phone number*
Country*
City*
Linkedin
Upload your CV*
(max. 4MB)
Upload your photo or video
(max. 4MB)