This post is over 30 days old. The position may no longer be available.
Canopy.cloud Private Wealth Data Platform is a Big Data platform that provides critical functionality for the investment bank: data-analytics-driven insight, understanding of micro investment strategies, and support for the legal/compliance function.
The main objective of the project is to design and implement critical data sourcing from global upstream systems into the local data platform. Project activities will include data discovery; preparing, reviewing, and submitting Cross-Data Approval requests; establishing formal data governance and data quality processes for the data flow; capturing Critical Data Elements and mapping them to business requirements; establishing connectivity between source systems and the Canopy Data Platform; and technical ingestion of the upstream data into the platform.
After the product is developed and rolled out, the Consultant will serve as the SME, providing support for the matching algorithms and data quality of the application.
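For illustration only, the matching/linking work described here could take the shape of a simple record-linkage routine. This is a minimal sketch, not the platform's actual algorithm; the `name` field, the sample records, and the 0.85 threshold are all hypothetical:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two entity names."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_records(upstream, canopy, threshold=0.85):
    """Link each upstream record to its best-scoring Canopy record.

    Records are dicts with a hypothetical 'name' field. Records with no
    match above the threshold are returned separately for data-quality
    review instead of being force-linked.
    """
    matched, unmatched = [], []
    for rec in upstream:
        best = max(canopy,
                   key=lambda c: name_similarity(rec["name"], c["name"]),
                   default=None)
        if best and name_similarity(rec["name"], best["name"]) >= threshold:
            matched.append((rec, best))
        else:
            unmatched.append(rec)
    return matched, unmatched

upstream = [{"name": "Acme Capital Ltd"}, {"name": "Unknown Fund"}]
canopy = [{"name": "ACME Capital Ltd."}, {"name": "Beta Partners"}]
m, u = match_records(upstream, canopy)
```

In practice a production matcher would combine several attributes and survivorship rules, but the same matched/unmatched split is what feeds the data-quality review loop the posting describes.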
- Conduct quantitative and qualitative analysis of new data
- Develop, document, and test the functional requirements and processes needed to Match / Link the new data to existing Canopy data sources
- Develop, document and test the functional requirements and processes needed to incorporate the new data into Products
- Produce summaries of findings, insights, and recommendations. All recommendations and findings must be supported by facts/data
- Develop expertise in Canopy business rules and provide guidance on the overall impact of business rule changes to systems (downstream impacts)
- Understand data transformation, validation, and maintenance rules/edits
- Understand data flows from one system to another - feeds into and extracts from the production databases
- Recommend changes to existing business rules to improve the overall data quality and performance of the data asset
- Implement data transformation pipelines on the Big Data platform (Python is a must; Spark and Scala knowledge is a plus)
- Assist Data Developers in establishing connectivity and automation for data sourcing
- Perform data science tasks, including data querying in SQL
- Query, process, and analyze large data sets
- Perform data reconciliation
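The data reconciliation duty above can be sketched in code. This is a hypothetical example using Python's built-in sqlite3 module as a stand-in for the real DWH: the table names (`source_feed`, `platform_trades`), columns, and sample rows are invented for illustration, not taken from the posting:

```python
import sqlite3

# Hypothetical reconciliation: compare row counts and amount totals
# between a source feed and the ingested platform table, and list
# trade ids present upstream but missing downstream.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source_feed (trade_id INTEGER, amount REAL);
    CREATE TABLE platform_trades (trade_id INTEGER, amount REAL);
    INSERT INTO source_feed VALUES (1, 100.0), (2, 250.0), (3, 75.5);
    INSERT INTO platform_trades VALUES (1, 100.0), (2, 250.0);
""")

def reconcile(con):
    """Return counts, totals, and trade ids missing from the target."""
    (src_n, src_sum), = con.execute(
        "SELECT COUNT(*), SUM(amount) FROM source_feed")
    (tgt_n, tgt_sum), = con.execute(
        "SELECT COUNT(*), SUM(amount) FROM platform_trades")
    missing = [row[0] for row in con.execute(
        "SELECT s.trade_id FROM source_feed s "
        "LEFT JOIN platform_trades p ON s.trade_id = p.trade_id "
        "WHERE p.trade_id IS NULL")]
    return {"source": (src_n, src_sum),
            "target": (tgt_n, tgt_sum),
            "missing_ids": missing}

report = reconcile(con)
```

The same anti-join pattern works on Spark or a DWH engine; only the connection layer changes.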
- Bachelor's degree in Computer Science or Software Engineering
- 5-7 years of Python experience in software/application development, with at least 3 years of experience in data analysis and solution design
- Scala (optional)
- Ability to write complex SQL queries (experience with a DWH preferred)
- Experience with DWH platforms
Apply for this position
Welcome to Hasjob!
Since 2011, Hasjob has been the place for Indian tech startups to list job opportunities. This is where startups hire before they become big and famous rocketships. All jobs on Hasjob are posted directly by founders or core team members. We do not accept listings from third-party recruiters.