Total Years of Experience: 5+ years
Relevant Years of Experience: 5+ years in data engineering, with hands-on experience in operations domains such as supply chain and manufacturing.

Mandatory Skills:
• Python
• Apache Spark
• SQL
• Snowflake
• Data pipeline & ETL development

Alternate Skills:
• Tableau or Power BI (reporting and visualization)
• PySpark
• AWS SageMaker
• Cloudera MLflow
• SAP, SAP HANA, Teamcenter (a plus, but not mandatory)

Good to Have (Not Mandatory):
• Life sciences manufacturing equipment domain knowledge
• Experience with AI/ML tools for data cleansing and transformation
• Familiarity with big data platforms and cloud-based data architectures
• dbt (Data Build Tool)
• Strong domain expertise in operations (supply chain, manufacturing)

Detailed Job Description:
Design, develop, and optimize data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, and dbt. Apply domain expertise in operations, particularly supply chain and manufacturing, to tailor solutions. Use big data frameworks to process large operational datasets efficiently, ensure data quality through profiling and validation, and collaborate with cross-functional teams to deliver actionable insights. Monitor and improve data pipeline performance, maintain data integrity, and stay current with emerging technologies.

Location: Bangalore. There are 2 open positions at the 7A band. The resource should be available for a face-to-face interview at an IBM location, based on account request, with Day 1 reporting post onboarding. RTH-Y. - U2XJNM