Job description
The project is built on a Big Data architecture in which data from all Back Offices is collected in a DataLake (Hadoop cluster). It consists of exploiting this data to feed a shared data warehouse stored on an Exadata database.
This is achieved by designing new Spark/Scala processes that apply business-intelligence rules to the DataLake's refined data.
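For illustration, here is a minimal sketch of such a process, assuming the refined layer is exposed as Hive tables and the warehouse is reached over Oracle JDBC; every table name, column, and connection detail below is a placeholder, not the project's actual configuration.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions._

// Minimal sketch of the pipeline described above. All table names, column
// names, and connection details are illustrative assumptions.
object RefinedToWarehouse {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("refined-to-exadata")
      .enableHiveSupport() // assumes the refined layer is exposed as Hive tables
      .getOrCreate()

    // Read refined data from the DataLake (hypothetical table).
    val refined = spark.table("refined.back_office_transactions")

    // Apply an illustrative business-intelligence rule: keep validated
    // movements and aggregate them per business line and value date.
    val aggregated = refined
      .filter(col("status") === "VALIDATED")
      .groupBy(col("business_line"), col("value_date"))
      .agg(
        sum(col("amount")).as("total_amount"),
        count(lit(1)).as("nb_transactions"))

    // Feed the warehouse over JDBC; Exadata is addressed like any Oracle
    // database (placeholder URL and credentials).
    aggregated.write
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//exadata-host:1521/DWHSVC")
      .option("dbtable", "DWH.F_BACK_OFFICE_DAILY")
      .option("user", sys.env("DWH_USER"))
      .option("password", sys.env("DWH_PASSWORD"))
      .option("driver", "oracle.jdbc.OracleDriver")
      .mode(SaveMode.Append)
      .save()

    spark.stop()
  }
}
```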
Objectives and deliverables
As part of the Business Intelligence & Big Data project team, the Spark/Scala expert will be responsible for the following activities:
- Detailed functional and technical design.
- Spark/Scala development.
- Contribution to the formalization of test plans, execution of unit tests and of a first level of integration tests.
- Troubleshooting during the functional integration and user acceptance phases.
- Packaging and deployment to pre-production and production.
- Optimization of processing to meet the DWG quality-of-service standard.
- Implementation of industrialized, re-entrant solutions so that processing can resume cleanly after a production incident (a sketch of this pattern follows the list).
- Go-live and deployment support.
- Environment monitoring.
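As a hedged illustration of the re-entrance point above, the sketch below makes a daily load idempotent by purging the target value date before re-appending it; the table, columns and connection details are again placeholder assumptions.

```scala
import java.sql.DriverManager
import org.apache.spark.sql.{DataFrame, SaveMode}

// Minimal sketch of a re-entrant (restartable) load, assuming the daily
// batch writes one value date at a time. All names are illustrative.
object ReentrantLoad {
  def writeDay(df: DataFrame, valueDate: String): Unit = {
    val url  = "jdbc:oracle:thin:@//exadata-host:1521/DWHSVC"
    val user = sys.env("DWH_USER")
    val pwd  = sys.env("DWH_PASSWORD")

    // 1. Purge any rows left over from a failed previous run, so that
    //    relaunching the job never produces duplicates.
    val conn = DriverManager.getConnection(url, user, pwd)
    try {
      val stmt = conn.prepareStatement(
        "DELETE FROM DWH.F_BACK_OFFICE_DAILY " +
        "WHERE VALUE_DATE = TO_DATE(?, 'YYYY-MM-DD')")
      stmt.setString(1, valueDate)
      stmt.executeUpdate()
    } finally conn.close()

    // 2. Append the recomputed rows; the delete+append pair makes the
    //    step idempotent.
    df.write
      .format("jdbc")
      .option("url", url)
      .option("dbtable", "DWH.F_BACK_OFFICE_DAILY")
      .option("user", user)
      .option("password", pwd)
      .option("driver", "oracle.jdbc.OracleDriver")
      .mode(SaveMode.Append)
      .save()
  }
}
```

With this pattern, recovery after an incident is simply a relaunch of the failed step, which is the kind of "optimum resumption of processing" the posting asks for.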
The consultant will work closely with the Project Manager, who is responsible for the batch chain, and with the program's development team.
Knowledge of Oracle is required.
More than 5 years' experience with Spark and Scala technologies is required.
Knowledge of a similar technological context is a plus.
Stack
Spark, Scala, Hadoop, Oracle