The client is a global asset, investment, and financial product management company operating across the United States and in over 40 other countries.
DataArt engages with the customer across multiple teams and areas, including Data Engineering, Cloud Engineering, and Data Architecture.
The client recently began developing a system that requires enhancements before its final rollout: a data warehouse that consolidates data flows and processes information so it can be presented to end users in a readable format.
Alongside this optimization work, the project aims to migrate from on-premises software to cloud services (AWS). We are the only vendor on this task and are expanding the team.
The project team consists of Data Engineers, a DevOps Specialist, a Front-End Developer, a Full-Stack Developer, Data Scientists, and a Business Analyst.
We are seeking a Data Engineer. This transformational position will play a critical role in the evolution and growth of the client's business and is directly aligned with the firm's commitment to proactively and innovatively meeting challenges and seizing opportunities by effectively leveraging data, analytics, and technology.
The Data Engineer will be responsible for ingesting, transforming, and validating data in the data lake. This individual will work closely with the Head of the Architecture team and will evaluate and implement the tools needed for data governance, architecture, and quality.
Technology stack: Amazon Web Services, ETL and ELT, Kafka, Spark, Pentaho.
• Collect, blend, and transform data using ETL tools, data management systems, and code development (e.g., SQL, Python)
• Perform aggregations on data across various warehousing models for data analytics and reporting purposes
• Interact with data analytics and reporting teams to understand how data needs to be structured for consumption
• Minimum of 2 years total work experience in a data engineering role
• Extensive hands-on experience in modern data lake architecture, database development, and data modeling
• Strong implementation skills and working knowledge of data structures, algorithms, and Big Data tools (Spark, Hadoop, Python, SQL, NoSQL, Hive). Must have hands-on experience using Spark
• Experience working in an agile environment embracing collaboration within and across teams
• Excellent written and verbal communication skills
• Detail oriented
• Analytical with strong problem-solving abilities
• Professional and energetic self-starter
• Comfortable with ambiguity, able to effectively take the conceptual to the pragmatic
• Spoken English
What we offer
• Experienced colleagues who are ready to share knowledge;
• The ability to switch projects and technology stacks and to try yourself in different roles;
• More than 150 workplaces for advanced training;
• Opportunities to study and practice English: courses and communication with colleagues and clients from different countries;
• Support for speakers who present at conferences and technology community meetups;
• Health insurance;
• The ability to focus on your work: no bureaucracy or micromanagement, and convenient corporate services;
• A friendly atmosphere, concern for the comfort of specialists, and a contemporary office space;
• A flexible schedule (with core mandatory hours) and the ability to work remotely upon agreement with colleagues.