Core Tasks:
- Maintain, improve, clean, and manipulate data in the business's operational and analytics databases
- Design and deploy data platforms across multiple domains, ensuring their operability
- Transform data for meaningful analyses
- Improve data efficiency, reliability, and quality
- Create data enrichment processes
- Build high-performance data pipelines
- Ensure data integrity
- Create and manage data stores at scale
- Ensure data governance - security, quality, access, and compliance
Required Qualifications:
- Experience with healthcare data
- Knowledge and understanding of healthcare privacy and security practices
- " 6+ years of designing, coding and supporting distributed, data-intensive systems at scale"
- " 3+ years of non-relational (NoSQL, Big Data) delivery"
- " 4+ years of development experience with BIG Data technology stack (e.g. Hadoop, MapR, Talend, PIG Scripting, HBase, HIVE, SPARK); Relational databases; and Test automation in Linux/Windows environments"
- " 3+ years in Dev Ops Automation tools (Oozie, Python, etc.)"
- " Excellent communication skills with the ability to describe data/capability stories" (and not defects) and explain value (and not resolution) to customers."
- " Experience in delivering Data Platforms"
- 6+ years of relational database delivery
- 7+ years of experience working within the Software Development Life Cycle (SDLC)
- 4+ years of experience in Agile Delivery
- BS/BA or equivalent experience
Preferred Qualifications:
- Hands-on experience with programming languages: Go, Python, Scala, Java
- Hands-on experience with fully automated testing frameworks (unit and integration): Cucumber, Spock, Go unit tests, JUnit
- Hands-on experience with big data and streaming frameworks: Kafka, Hadoop, Hive, Spark, HDFS
- Hands-on experience with PaaS offerings such as Kubernetes and OpenShift
- Hands-on experience with CI/CD and monitoring platforms, namely GitHub, Jenkins, Grafana, and Prometheus
- Working experience in a cloud environment, preferably Azure