- Design and maintain scalable ETL pipelines with DBT and SQL, ensuring high performance and reliability.
- Develop advanced DBT workflows using artifact files, graph variables, and complex macros leveraging run_query.
- Implement multi-repo or mesh DBT setups to support scalable and collaborative workflows.
- Utilize DBT Cloud features such as documentation, Explorer, CLI, and orchestration to optimize data processes.
- Build and manage CI/CD pipelines to automate and enhance data deployment processes.
- Write and optimize complex SQL queries to transform large datasets and ensure data accuracy.
- Collaborate with cross-functional teams to integrate data solutions into existing workflows.
- Troubleshoot and resolve errors in pipelines caused by DBT code or transformation issues.
- Adhere to best practices for version control using git flow workflows to manage and deploy code changes.
- Ensure code quality and maintainability by implementing code linting and conducting code reviews.
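The kind of run_query-based macro referenced above can be sketched as follows — a minimal illustration only, assuming a hypothetical `raw_payments` model; the actual project models and macro names would differ:

```sql
-- macros/get_payment_methods.sql
-- Sketch of a macro that queries the warehouse at compile time via run_query.
{% macro get_payment_methods() %}
    {% set query %}
        select distinct payment_method
        from {{ ref('raw_payments') }}  -- hypothetical model name
    {% endset %}

    {% set results = run_query(query) %}

    {# run_query only returns results during execution, not at parse time #}
    {% if execute %}
        {% set methods = results.columns[0].values() %}
    {% else %}
        {% set methods = [] %}
    {% endif %}

    {{ return(methods) }}
{% endmacro %}
```

A macro like this can then drive dynamic SQL in a model, for example pivoting one column per payment method inside a `for` loop.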
- 8+ years of experience in data engineering with a strong focus on ETL processes and data pipeline management.
- MUST have experience with the Azure cloud, working on data warehousing involving ADF (Azure Data Factory), Azure Data Lake, DBT, and Snowflake.
- 4+ years of hands-on experience with DBT.
- Advanced proficiency in SQL and data modeling techniques.
- Deep understanding of DBT, including artifact files, graph usage, and MetricFlow.
- Proficiency in DBT Cloud features like CLI, orchestration, and documentation.
- Strong skills in Python for scripting and automation tasks.
- Familiarity with CI/CD pipeline tools and workflows.
- Hands-on experience with git flow workflows for version control.
- Solid troubleshooting skills to resolve pipeline errors efficiently.
- Knowledge of pipeline orchestration and automation.
- A proactive problem-solver with excellent attention to detail.
- Strong communication and collaboration skills to work with cross-functional teams.
- A positive attitude and ownership mindset to drive projects to completion.