Experience: Over 4 years
· Design, develop, and maintain scalable data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
· Architect and implement data storage solutions, including data warehouses, data lakes, and data marts, aligned with business needs.
· Implement robust data quality checks and data cleansing techniques to ensure data accuracy and consistency.
· Optimize data pipelines for performance, scalability, and cost-effectiveness.
· Collaborate with data analysts and data scientists to understand data requirements and translate them into technical solutions.
· Develop and maintain data security measures to ensure data privacy and regulatory compliance.
· Automate data processing tasks using scripting languages (Python, Bash) and big data frameworks (Spark, Hadoop).
· Monitor data pipelines and infrastructure for performance and troubleshoot any issues.
· Stay up to date with the latest trends and technologies in data engineering, including cloud platforms (AWS, Azure, GCP).
· Document data pipelines, processes, and data models for maintainability and knowledge sharing.
· Contribute to the overall data governance strategy and best practices.
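The responsibilities above can be sketched as a minimal extract-transform-load pipeline with an inline data quality check. This is an illustrative example only, not a description of any specific pipeline at the company; all names (`Record`, `quality_check`, the in-memory source and sink) are hypothetical stand-ins for real sources, Spark jobs, and warehouse tables:

```python
# Minimal ETL sketch with a data quality gate (hypothetical names throughout).
from dataclasses import dataclass


@dataclass
class Record:
    id: int
    amount: float


def extract(rows):
    """Extract: pull raw rows from a source (here, an in-memory list)."""
    return list(rows)


def transform(raw):
    """Transform/cleanse: drop rows missing required fields, cast types."""
    out = []
    for r in raw:
        if r.get("id") is None or r.get("amount") in (None, ""):
            continue  # data-cleansing step: skip incomplete rows
        out.append(Record(id=int(r["id"]), amount=float(r["amount"])))
    return out


def quality_check(records):
    """Data quality check: ids must be unique, amounts non-negative."""
    ids = [r.id for r in records]
    assert len(ids) == len(set(ids)), "duplicate ids"
    assert all(r.amount >= 0 for r in records), "negative amount"
    return records


def load(records, sink):
    """Load: write validated records into a destination (here, a dict)."""
    for r in records:
        sink[r.id] = r.amount
    return sink


raw = [
    {"id": "1", "amount": "9.5"},
    {"id": "2", "amount": None},   # incomplete row, dropped by transform
    {"id": "3", "amount": "0"},
]
warehouse = load(quality_check(transform(extract(raw))), {})
```

In a production pipeline the same extract → transform → quality-check → load stages would typically be expressed as Spark jobs orchestrated by a scheduler, with the quality gate failing the run before bad data reaches the warehouse.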
#59/2, Heritage Building,
Kaderanahalli,
Dr. Puneeth Rajkumar Road,
Banashankari 2nd Stage,
Bangalore – 560070,
Karnataka, INDIA
© Molecular Connections Pvt. Ltd. All rights reserved.