
Python Developer

Mumbai · canopy.cloud · Freelance or consulting · Programming

Canopy.cloud's Private Wealth Data Platform is a Big Data platform that provides critical functionality for an investment bank: analytics-driven insight, a view into micro investment strategies, and support for the legal/compliance function.

The main objective of the project is to design and implement critical data sourcing from global upstream systems into the local data platform. Project activities include data discovery; preparing, reviewing, and submitting the Cross-Data Approval request; establishing formal data governance and data quality processes for the data flow; capturing Critical Data Elements and mapping them to business requirements; establishing connectivity between the source systems and the Canopy Data Platform; and technical ingestion of the upstream data into the platform.

After the product is developed and rolled out, the Consultant will serve as the SME, providing support for the matching algorithms and data quality of the application.

Responsibilities:

  • Conduct quantitative and qualitative analysis of new data
  • Develop, document, and test the functional requirements and processes needed to match/link the new data to existing Canopy data sources
  • Develop, document, and test the functional requirements and processes needed to incorporate the new data into products
  • Produce summaries of findings, insights, and recommendations; all recommendations and findings must be supported by facts/data
  • Develop expertise in Canopy business rules and provide guidance on the overall impact of business rule changes on systems (downstream impacts)
  • Understand data transformation, validation, and maintenance rules/edits
  • Understand data flows from one system to another, including feeds into and extracts from the production databases
  • Recommend changes to existing business rules to improve the overall quality and performance of the data asset
  • Implement data transformation pipelines in the Big Data platform (Python is a must; Spark and Scala knowledge may come in handy)
  • Assist Data Developers in establishing connectivity and automation for data sourcing
  • Perform data science tasks, including data querying in SQL
  • Query, process, and analyze large data sets
  • Perform data reconciliation
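For a flavour of the reconciliation work listed above, here is a minimal, self-contained Python sketch. The field names (`account_id`, `balance`) and the matching rule are illustrative assumptions for this example, not Canopy specifics:

```python
def reconcile(source_rows, platform_rows, key="account_id"):
    """Compare upstream records against platform records by key.

    Returns (missing, mismatched): keys absent from the platform,
    and keys whose balances disagree between the two systems.
    """
    platform = {row[key]: row for row in platform_rows}
    missing, mismatched = [], []
    for row in source_rows:
        match = platform.get(row[key])
        if match is None:
            missing.append(row[key])
        elif row["balance"] != match["balance"]:
            mismatched.append(row[key])
    return missing, mismatched

# Hypothetical sample data: one record missing downstream, one disagreeing.
upstream = [
    {"account_id": "A1", "balance": 100.0},
    {"account_id": "A2", "balance": 250.0},
    {"account_id": "A3", "balance": 75.0},
]
platform = [
    {"account_id": "A1", "balance": 100.0},
    {"account_id": "A2", "balance": 240.0},  # disagrees with upstream
]

missing, mismatched = reconcile(upstream, platform)
print(missing)     # ['A3']
print(mismatched)  # ['A2']
```

In practice this kind of check would run inside a Spark or DWH pipeline over large data sets, but the keyed-lookup-and-compare logic is the same.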

Skills Required:

  • Bachelor's degree in Computer Science or Software Engineering
  • 5–7 years of Python experience in software/application development, with at least 3 years of experience in data analysis and solution design
  • Scala (optional)
  • Ability to write complex SQL queries (DWH experience preferred)
  • Experience with DWH platforms

Apply for this position


