FSS
Req ID: IBMFG2JP00009289
Experience Needed: 8+ years (Developer-Standard)
Relevant experience: 8 years
Location: Bangalore / Hyderabad

Mandatory skills:
Primary Skills:
• Core AWS Data Engineering & DevOps: Python, SQL, CI/CD (Jenkins, CloudFormation), ETL/ELT, Data Lake, AWS Data Services (IAM, Glue, Glue Crawlers, Glue DataBrew, Step Functions, Lambda, Redshift, SQS, SNS, EventBridge, Athena, Lake Formation), Bitbucket/GitHub
Secondary Skills:
• Streaming (Kinesis, Kafka), Docker, SonarQube, Maven, Groovy
Nice-to-have skills:
• CloudBees, QA integration, Agile project management, documentation, continuous learning

Job Description:
Qualifications:
• Bachelor's degree in Computer Science, Software Engineering, MIS, or an equivalent combination of education and experience
• 8+ years of experience as a Data Engineer on the AWS stack, with experience using DevOps tools
• AWS Solutions Architect or AWS Developer certification required
• Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, Glue DataBrew, EMR/Spark, RDS, Redshift, DataSync, DMS, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, EventBridge, EC2, SQS, SNS, Lake Formation, CloudWatch, and CloudTrail
• Implement high-velocity streaming and orchestration solutions using Amazon Kinesis, Amazon Managed Workflows for Apache Airflow, and Amazon Managed Streaming for Apache Kafka (preferred); see the Kinesis sketch after this list
• Solid experience building data lake/data warehouse solutions on AWS
• Analyze, design, develop, and implement data ingestion pipelines in AWS
• Knowledge of implementing ETL/ELT for data solutions
• Build end-to-end data solutions (ingestion, storage, integration, processing, access) on AWS
• Knowledge of implementing RBAC strategies/solutions using AWS IAM and the Redshift RBAC model; see the Redshift sketch after this list
• Build and implement CI/CD pipelines for the EDP platform using CloudFormation and Jenkins
• Programming experience with Python, shell scripting, and SQL
• Knowledge of analyzing data using SQL stored procedures
• Build automated data pipelines to ingest data from relational database systems, file systems, and NAS shares into AWS relational databases such as Amazon RDS, Aurora, and Redshift
• Build automated data pipelines to ingest data from REST APIs into the AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift; see the REST-to-S3 sketch after this list
• Good experience with DevOps practices
• Experience with modern source code management and software repository systems (Bitbucket)
• Experience with programming languages (Python)
• Experience with scripting languages (Shell, Groovy)
• Experience with API deployment for tooling integration
• Experience using Jenkins and CloudBees (Pipeline as Code, Shared Libraries)
• Ability to document exceptions, issues, action plans, meeting minutes, and lessons learned accurately and in a timely fashion
• Experience administering DevOps tools delivered as SaaS
• Experience using DevOps tools (SonarQube, Artifactory, etc.)
• Experience using build tools (Maven, MSBuild, and Gradle)
• Experience using containers (Docker)
• Experience using the Atlassian suite (Jira, Confluence)
• Experience with Infrastructure as Code using CloudFormation; see the stack-deploy sketch after this list
• Create Jenkins CI pipelines that integrate Sonar/security scans and test automation scripts
• Work as part of the DevOps QA and AWS team focused on building the CI/CD pipeline
• Responsible for writing and maintaining Jenkins pipelines
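For the streaming bullet, a minimal sketch of the kind of Kinesis producer this role describes, using boto3; the stream name, event shape, and partition key are hypothetical, and real producers would retry the failed subset of a batch:

```python
"""Minimal Kinesis producer sketch, assuming boto3 and a hypothetical stream."""
import json

import boto3

kinesis = boto3.client("kinesis")
STREAM = "edp-clickstream"  # hypothetical stream name


def publish(events: list) -> None:
    # PutRecords batches up to 500 records per call, which matters far more
    # at high throughput than one PutRecord call per event.
    records = [
        {
            "Data": json.dumps(e).encode("utf-8"),
            "PartitionKey": str(e["user_id"]),  # spreads load across shards
        }
        for e in events
    ]
    resp = kinesis.put_records(StreamName=STREAM, Records=records)
    if resp["FailedRecordCount"]:
        # A production producer retries only the failed records; omitted here.
        raise RuntimeError(f"{resp['FailedRecordCount']} records failed")


publish([{"user_id": 42, "action": "page_view"}])
```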
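For the Redshift RBAC bullet, a minimal sketch of applying native Redshift roles through the Redshift Data API; the cluster, database, schema, role, user, and secret names are all hypothetical:

```python
"""Redshift RBAC sketch via the Redshift Data API, assuming boto3;
all identifiers below are hypothetical."""
import boto3

client = boto3.client("redshift-data")

SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:edp/redshift-admin"  # hypothetical

# Create a read-only role, scope it to one schema, then grant it to a
# database user (who may in turn be federated through IAM).
statements = [
    "CREATE ROLE analytics_readonly;",
    "GRANT USAGE ON SCHEMA curated TO ROLE analytics_readonly;",
    "GRANT SELECT ON ALL TABLES IN SCHEMA curated TO ROLE analytics_readonly;",
    "GRANT ROLE analytics_readonly TO analyst_user;",
]

for sql in statements:
    client.execute_statement(
        ClusterIdentifier="edp-redshift",  # hypothetical cluster
        Database="edp",
        SecretArn=SECRET_ARN,
        Sql=sql,
    )
```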
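For the REST-API ingestion bullet, a minimal sketch of one landing-zone step, assuming a paginated JSON API; the endpoint, bucket, and prefix are hypothetical, and loading from S3 into RDS/Aurora/Redshift would follow as a separate step:

```python
"""REST-to-S3 ingestion sketch, assuming requests and boto3;
the API endpoint and bucket names are hypothetical."""
import json
from datetime import datetime, timezone

import boto3
import requests

API_URL = "https://api.example.com/v1/orders"  # hypothetical source API
BUCKET = "edp-data-lake-raw"                   # hypothetical landing bucket
PREFIX = "orders/ingest_date="


def ingest_to_s3() -> None:
    s3 = boto3.client("s3")
    partition = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    page = 1
    while True:
        resp = requests.get(API_URL, params={"page": page}, timeout=30)
        resp.raise_for_status()
        records = resp.json().get("results", [])
        if not records:
            break
        # Newline-delimited JSON so Glue crawlers and Athena can read
        # the landing zone directly.
        body = "\n".join(json.dumps(r) for r in records)
        s3.put_object(
            Bucket=BUCKET,
            Key=f"{PREFIX}{partition}/page-{page:05d}.json",
            Body=body.encode("utf-8"),
        )
        page += 1


if __name__ == "__main__":
    ingest_to_s3()
```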
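For the Infrastructure-as-Code and CI/CD bullets, a minimal sketch of the stack-deploy step a Jenkins stage might invoke: create or update a CloudFormation stack from a template in S3. The stack name and template URL are hypothetical:

```python
"""CloudFormation stack-deploy sketch, assuming boto3; the stack name and
template URL are hypothetical."""
import boto3
from botocore.exceptions import ClientError

cfn = boto3.client("cloudformation")

STACK = "edp-glue-pipeline"  # hypothetical stack name
TEMPLATE_URL = "https://s3.amazonaws.com/edp-artifacts/glue-pipeline.yaml"


def deploy() -> None:
    kwargs = dict(
        StackName=STACK,
        TemplateURL=TEMPLATE_URL,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # template creates IAM roles
    )
    try:
        cfn.create_stack(**kwargs)
        waiter = cfn.get_waiter("stack_create_complete")
    except ClientError as err:
        if err.response["Error"]["Code"] != "AlreadyExistsException":
            raise
        # Stack already exists: fall back to an update. A real pipeline also
        # handles the "No updates are to be performed" validation error.
        cfn.update_stack(**kwargs)
        waiter = cfn.get_waiter("stack_update_complete")
    waiter.wait(StackName=STACK)


if __name__ == "__main__":
    deploy()
```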
Skills:
AWS Data Engineering, CI/CD, Data Lake, DevOps, ETL/ELT, Infrastructure as Code, Python, SQL