Data Warehouse Engineer - Intern

We harness the power of innovation so that you can change the world and help our customers solve their most complex challenges.

Dubai
Office
R175994
Internship Program
In a world of possibilities, pursue one with endless opportunities. Imagine Next!

 

At Parsons, you can imagine a career where you thrive, work with exceptional people, and be yourself. Guided by our leadership vision of valuing people, embracing agility, and fostering growth, we cultivate an innovative culture that empowers you to achieve your full potential. Unleash your talent and redefine what’s possible.

 

Job Description:

Position Overview

Parsons is seeking a high-potential Data Engineer Graduate Intern to join our Technology and Innovation team. This role is designed for candidates with strong analytical foundations and an interest in building scalable, enterprise-grade data platforms that support operational, engineering, and executive decision-making.

As an intern, you will contribute to the design, development, and optimization of cloud-based data pipelines and analytics platforms, primarily within the Microsoft Azure ecosystem. You will work alongside experienced data engineers, architects, and product teams on real delivery programs, gaining exposure to enterprise data standards, governance, and DevOps practices.

Key Responsibilities

Data Processing

  • Support development of batch and streaming data pipelines using Python and distributed processing frameworks such as Apache Spark (Databricks), Hadoop, or Apache Beam (a minimal sketch follows this list)
  • Assist in processing and transforming structured and semi-structured data at scale
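For a flavor of this kind of work, here is a minimal PySpark batch-transform sketch. The storage paths, account names, and columns are hypothetical placeholders, not Parsons systems.

```python
# Minimal PySpark batch-transform sketch. Paths, container/account names,
# and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-transform-sketch").getOrCreate()

# Read semi-structured JSON, clean it, and write partitioned Parquet.
raw = spark.read.json("abfss://raw@exampleaccount.dfs.core.windows.net/events/")
clean = (
    raw.filter(F.col("event_type").isNotNull())            # drop malformed rows
       .withColumn("event_date", F.to_date("event_timestamp"))
       .dropDuplicates(["event_id"])                       # de-duplicate on key
)
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://curated@exampleaccount.dfs.core.windows.net/events/"
)
```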

ETL/ELT Implementation

  • Contribute to the design and implementation of ETL/ELT workflows for data integration and transformation using Azure Data Factory, Databricks, or equivalent tools
  • Support data ingestion from multiple sources such as databases, APIs, files, and cloud storage (see the ingestion sketch below)
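To illustrate the extract-load half of an ELT flow, a minimal Python sketch that pulls JSON from a REST API and lands it unmodified for downstream transformation. The endpoint and landing path are hypothetical.

```python
# Minimal extract-load sketch: pull JSON from a REST API and land it raw.
# The endpoint and landing path are hypothetical.
from datetime import date
import json
import requests

response = requests.get("https://api.example.com/v1/orders", timeout=30)
response.raise_for_status()  # surface ingestion failures early

# Land the payload unmodified; transformation happens later in Spark/SQL.
with open(f"landing/orders_{date.today().isoformat()}.json", "w") as f:
    json.dump(response.json(), f)
```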

Cloud Integration & Platform (Microsoft Azure)

  • Work with Azure-native data services, including:
    • Azure Data Factory
    • Azure Synapse Analytics
    • Azure Data Lake Storage (ADLS Gen2)
    • Azure Databricks
  • Support secure configuration of cloud resources, access controls, and data storage (see the SDK sketch below)
  • Build awareness of equivalent services on AWS (S3, Redshift, Glue) and Google Cloud Platform (BigQuery, Dataflow); multi-cloud exposure is beneficial but not required
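As one illustration of secure access configuration, a sketch using DefaultAzureCredential (which picks up a managed identity in Azure, or az login locally) with the ADLS Gen2 SDK. The storage account, container, and file paths are hypothetical.

```python
# Sketch: upload a landed file to ADLS Gen2 without embedded secrets.
# Account, container, and paths are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

credential = DefaultAzureCredential()  # managed identity in Azure, az login locally
service = DataLakeServiceClient(
    account_url="https://exampleaccount.dfs.core.windows.net",
    credential=credential,
)
container = service.get_file_system_client("raw")   # ADLS "file system" = container
file_client = container.get_file_client("orders/orders.json")

with open("landing/orders.json", "rb") as data:
    file_client.upload_data(data, overwrite=True)
```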

Database Management

  • Query and manage relational databases (Azure SQL, SQL Server, PostgreSQL, MySQL, Oracle); a query sketch follows this list
  • Gain exposure to NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB)
  • Support analytics and reporting use cases using modern data warehouse / lakehouse architectures
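A minimal sketch of parameterized querying against Azure SQL with pyodbc. The server, database, table, and columns are hypothetical, and the Authentication option assumes a managed identity is available.

```python
# Sketch: parameterized aggregation query against Azure SQL via pyodbc.
# Server, database, and table names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=exampledb;"
    "Authentication=ActiveDirectoryMsi"  # assumes a managed identity
)
cursor = conn.cursor()
cursor.execute(
    """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM sales.orders
    WHERE order_date >= ?
    GROUP BY customer_id
    ORDER BY total_spend DESC
    """,
    "2024-01-01",  # parameter binding avoids SQL injection
)
for row in cursor.fetchall():
    print(row.customer_id, row.total_spend)
conn.close()
```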

Data Warehousing

  • Support the development and optimization of modern data warehouse solutions like Databricks, Snowflake, Redshift, or BigQuery.

Pipeline Orchestration

  • Assist with workflow orchestration using tools such as Azure Data Factory pipelines, Apache Airflow, Prefect, or Luigi (a DAG sketch follows this list)
  • Support scheduling, monitoring, and failure handling of data pipelines
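For a sense of what orchestration code looks like, a minimal Apache Airflow DAG sketch covering scheduling, retries, and task ordering. The task bodies are placeholders, and Airflow 2.4+ syntax is assumed.

```python
# Minimal Airflow DAG sketch: daily schedule, retries, and task ordering.
# Task bodies are placeholders. Assumes Airflow 2.4+ (the `schedule` arg).
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extract step")    # placeholder for real ingestion logic

def transform():
    print("transform step")  # placeholder for real transformation logic

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only if extract succeeds
```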

Big Data Tools

  • Work with distributed data systems and storage solutions like HDFS or cloud-native equivalents.

DevOps, Quality & Optimization

  • Collaborate with the team using Git-based workflows (Azure DevOps Repos or GitHub) for code versioning and review
  • Diagnose and resolve performance issues in data systems and optimize database queries
  • Support data quality checks and performance tuning (a quality-check sketch follows this list)
  • Assist with documentation of data pipelines, schemas, and system design
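As an example of a lightweight data quality gate, a PySpark sketch that fails fast when key invariants are violated. The dataset path and column names are hypothetical.

```python
# Sketch: fail-fast data quality checks on a curated dataset.
# Path and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()
df = spark.read.parquet(
    "abfss://curated@exampleaccount.dfs.core.windows.net/orders/"
)

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicates = total - df.dropDuplicates(["order_id"]).count()

assert total > 0, "dataset is empty"
assert null_keys == 0, f"{null_keys} rows have a null order_id"
assert duplicates == 0, f"{duplicates} duplicate order_id values"
print(f"DQ passed: {total} rows, unique non-null keys")
```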

Technical Requirements

Programming

  • Proficiency in Python; experience with scripting languages for automation
  • Solid understanding of SQL for data querying and transformation

Data Processing Frameworks

  • Hands-on experience with Apache Spark, Hadoop, or Apache Beam
  • Understanding of ETL/ELT concepts and data pipeline design

Database and Querying

  • Strong SQL skills; experience with relational databases (PostgreSQL, MySQL, Oracle)
  • Experience with NoSQL databases (MongoDB, Cassandra, DynamoDB)

Cloud Platforms

  • Familiarity with Microsoft Azure data services: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks
  • Familiarity with equivalent AWS (S3, Redshift, Glue) or Google Cloud services is a plus
  • Awareness of Azure security and identity concepts (RBAC, managed identities) is advantageous

Data Warehousing

  • Experience with Databricks, Snowflake, Redshift, or BigQuery

Data Pipelines and Orchestration

  • Knowledge of tools like Apache Airflow, Prefect, or Luigi

Big Data Tools

  • Experience with distributed data systems and storage solutions like HDFS

Version Control

  • Proficiency with Git for code versioning and collaboration

Preferred Qualifications

  • Exposure to Azure DevOps or GitHub Actions
  • Familiarity with Agile / Scrum delivery environments
  • Interest in enterprise analytics, cloud platforms, and data governance
  • Practical exposure to building and optimizing scalable data pipelines, both batch and real-time
  • Familiarity with diagnosing and resolving performance issues in data systems
  • Understanding of data privacy regulations (e.g., GDPR, CCPA) and experience implementing data quality checks and access controls
  • Certifications (optional but valuable):
    • AWS Certified Data Analytics – Specialty
    • Google Professional Data Engineer
    • Microsoft Azure Data Engineer Associate
    • Databricks Certified Data Engineer Associate
  • Note: Multi-cloud exposure (AWS / GCP) is beneficial but not required; the primary environment is Microsoft Azure.

Soft Skills

  • Problem-Solving: Ability to troubleshoot complex data and system issues independently.
  • Communication: Ability to collaborate with data analysts, scientists, and engineers to understand data needs and deliver solutions.
  • Documentation: Ability to document data workflows, system designs, and troubleshooting procedures effectively.
  • Team Collaboration: Experience working in cross-functional teams using Agile or similar methodologies.

Education

  • Bachelor’s degree (or final-year student) in Computer Science, Data Engineering, Information Systems, Engineering, or a related field
  • Relevant projects, internships, or practical experience may substitute for formal education

Learning Opportunities

  • Hands-on experience building data pipelines in a Microsoft Azure enterprise environment
  • Exposure to lakehouse architectures, analytics platforms, and cloud security practices
  • Practical experience with Databricks, Azure Data Factory, and Synapse
  • Mentorship from senior data engineers and architects working on live programs
  • Insight into how data engineering supports large-scale infrastructure, engineering, and program delivery

Duration

  • Internship duration: 3 to 6 months, with the possibility of extension.

Parsons is an equal opportunity employer and promotes representation at all job levels regardless of race, color, religion, sex (including pregnancy), national origin, age, disability, or genetic information.

We truly invest in and care about our employees' wellbeing and provide endless growth opportunities. The sky is the limit, so aim for the stars! Imagine next and join the Parsons quest. APPLY TODAY!

Parsons is aware of fraudulent recruitment practices. To learn more about recruitment fraud and how to report it, please refer to https://www.parsons.com/fraudulent-recruitment/.

COMPETITIVE BENEFIT OFFERINGS

Financial Wellness

We care about your financial wellbeing. Parsons offers competitive pay and retirement plans to help you build wealth for the future while giving you the flexibility to diversify your investments.

Work Life Harmony

Balance in life is important and time away from the office is imperative to allow you to refresh and focus your attention on the things that matter to you. Parsons supports your time away by providing paid time off and paid flexible holidays.

Career Development

We are committed to fostering the personal and professional growth of our employees. Develop and advance yourself through our comprehensive training, educational, and mentorship programs.

Veteran Support

We provide industry-leading benefits to support veterans and active-duty members and to give you and your family security, including robust leave options such as paid active-duty military leave and paid time off when transitioning back to civilian life.

Mind & Body

At Parsons we inspire healthier habits, healthier minds, and a healthier you through our wellness program. Participate in our weekly Meditation Mondays and Wellness Wednesdays. Wellness, at Parsons, is more than just your annual checkup.

Health

Health is not one size fits all. At Parsons, we offer a robust Employee Assistance Program as well as comprehensive medical, dental, and vision plans through large, national carriers, with the choice of regional PPO, HDHP, or HMO networks.


Join Our Talent Community

Join our Talent Community and imagine next with us!
