Senior Data Software Engineer (25001023) (Budapest)


 

Senior Data Engineer

About Digital Factory

It’s a place where the agility of a start-up meets the stability of a global powerhouse! 

Be part of our journey as we transform MOL Group’s Consumer Services from a traditional fuel retailer to a cutting-edge, digitally driven consumer goods and mobility services provider. As part of MOL Group’s 2030 - Enter Tomorrow strategy, our mission is to lead the market in Central Europe through innovative digital transformation. 

At Digital Factory, we harness the power of data to drive business decisions by building a comprehensive Data Lake and developing advanced analytics solutions in forecasting, category management, pricing, and workforce optimization. We're also revolutionizing customer experience with MOL MOVE, our new digital, app-first loyalty rewards program on the Salesforce platform, designed to personalize and enhance convenience for our customers. 

MOL Group Consumer Services currently serves approximately 10 million customers through 2,000 service stations across 9 countries, a scale that lets us leverage our technologies regionally.

Our engineers are innovators with a passion for business, user behavior, and data. They excel in communication, build strong relationships with non-engineers, and navigate product/engineering tradeoffs with ease. With a focus on quick product validation and end-to-end feature ownership, they drive continuous learning and impactful solutions.

If you're curious, proactive, and eager to make a difference, the Digital Factory's Engineering Team is the place for you!

 

About the Role

We are looking to hire an experienced and passionate Senior Data Engineer to join Digital Factory. The new team member will join our digital engineering team, with a focus on leveraging MS SQL and Databricks' capabilities for data engineering, analytics, and machine learning in a cloud-native environment.

 

Main Responsibilities

  • Build scalable ETL/ELT pipelines using Azure Databricks to integrate data sources like Azure Blob Storage, Azure Data Lake, and SQL Databases.
  • Automate data workflows, including ingestion, transformation, and loading processes, using Azure Data Factory (ADF), Databricks workflows, and Azure Functions.
  • Implement real-time streaming data pipelines with Databricks Structured Streaming for IoT or event-driven architectures.
  • Use Apache Spark in Databricks for efficient data cleaning, transformation, and enrichment of large datasets.
  • Design and implement dimensional data models (e.g., star schema) for optimized performance in Azure Synapse Analytics.
  • Leverage Delta Lake for ACID-compliant data storage and optimization for batch and stream processing.
  • Tune Databricks notebooks and jobs for performance, utilizing autoscaling and optimizing Spark configurations.
  • Apply data partitioning, manage table storage formats (e.g., Parquet, Delta), and index data for faster querying.
  • Manage Databricks clusters, ensuring resource efficiency through autoscaling, sizing, and cost management.
  • Integrate Databricks with Azure services such as Azure Data Lake Storage, Azure SQL, Azure Synapse, Azure Key Vault, and Power BI.
  • Set up monitoring, alerting, and logging for data pipelines using Azure Monitor, Databricks Jobs, and Azure Log Analytics.
  • Ensure data security through encryption, role-based access control (RBAC), Azure Active Directory (AAD), and Databricks Secrets.
  • Implement data quality checks in Databricks pipelines to ensure data accuracy and consistency.
  • Use Git for version control and implement CI/CD pipelines for Databricks workflows via Azure DevOps.
  • Collaborate with data scientists, analysts, and business users to deliver data-driven insights and solutions.
  • Maintain documentation of data architectures, processes, and pipelines in DF Confluence for scalability and collaboration.
  • Support exploratory data analysis and integrate Databricks with Power BI or other visualization tools.
  • Collaborate with data science teams to deploy machine learning models using Databricks MLflow and Azure ML services.
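To give a flavour of the data quality checks mentioned in the responsibilities above, here is a minimal, hypothetical sketch in plain Python. It is illustrative only: in a Databricks pipeline this logic would typically run on Spark DataFrames or as Delta Live Tables expectations, and the field names below are invented for the example.

```python
def check_rows(rows, required_fields, non_negative_fields=()):
    """Split rows into (valid, rejected) using simple quality rules:
    required fields must be present and non-empty, and numeric
    fields listed in non_negative_fields must not be negative."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        negative = [f for f in non_negative_fields
                    if isinstance(row.get(f), (int, float)) and row[f] < 0]
        if missing or negative:
            # Keep the failing field names alongside the row for auditing.
            rejected.append((row, missing + negative))
        else:
            valid.append(row)
    return valid, rejected

# Invented sample records for illustration:
transactions = [
    {"station_id": "HU-001", "fuel_litres": 42.5},
    {"station_id": "", "fuel_litres": 10.0},       # missing station_id
    {"station_id": "HU-002", "fuel_litres": -3.0}, # negative volume
]
valid, rejected = check_rows(transactions, ["station_id"], ["fuel_litres"])
```

In a production pipeline the rejected rows would usually be routed to a quarantine table rather than dropped, so data accuracy issues can be investigated upstream.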

 

About Your Experience and Skills

  • Education: University or college degree with a specialization in Computer Science, Information Systems, or equivalent.
  • Experience: 5+ years of experience in DWH development and data engineering.
  • Retail IT domain background is a plus.
  • Professional and Technical Competencies: Agile DevOps (strategies, processes, and tooling).
  • Extensive proficiency in SQL including complex querying, optimization, performance tuning, and database administration tasks.
  • Mastery of ETL tools and frameworks such as Informatica, Talend, or SSIS (SQL Server Integration Services) for designing and developing scalable and efficient ETL processes.
  • Data modeling techniques, including dimensional modeling, star schema, and snowflake schema for scalable and optimized data structures.
  • Knowledge of big data platforms and technologies such as Hadoop, Spark, Databricks, and cloud-based data services (e.g., MS Synapse, MS Fabric).
  • Proficiency in scripting languages like Python and PowerShell for ETL automation, workflow scheduling, and data quality checks.
  • Optimizing data warehouse performance through query tuning, indexing strategies, partitioning, and caching.
  • Knowledge of data governance principles, metadata management, and implementing data quality processes.
  • Familiarity with BI and reporting tools such as Tableau, Power BI, or OBIEE for dashboards, reports, and visualizations.
  • Deep experience with cloud-based data warehousing platforms like MS Synapse Analytics Suite and Databricks or Snowflake.
  • Strong leadership skills to collaborate with cross-functional teams including business stakeholders, data scientists, and analysts, and to mentor and coach junior developers.
  • Cloud Data & Analytics technologies: Databricks, Azure ML, Databricks MLflow, Azure Synapse Analytics, Delta Lake, Azure Data Factory (ADF), Power BI.
  • Development & Automation technologies: Python, T-SQL, ANSI SQL, CI/CD Pipelines (Azure, Databricks), Azure Monitor, Azure DevOps.
  • Sparx EA tool experience is a plus.
  • Professional usage of Microsoft Office Suite.
  • Experience with Atlassian JIRA, Confluence.
  • Soft Skills: Proven analytical abilities.
  • Self-driven, detail-oriented, capable of multi-tasking, highly organized, with excellent time management skills.
  • Demonstrated ability to meet deadlines, handle and prioritize simultaneous requests, and manage laterally and upwards.
  • Creative and analytical thinker with strong problem-solving skills.
  • Strong written and verbal communication skills in English for cross-functional collaboration.
  • Ability to critically evaluate information gathered from multiple sources, reconcile conflicts, decompose high-level information into details, abstract up from low-level information to a general understanding, and distinguish user requests from the underlying true needs.
  • Strong attention to detail and commitment to delivering high-quality work.
  • Proactive mindset with a positive attitude and a strong sense of ownership, and team mentality.
  • Growth mindset with a passion for continuous learning, adapting, and staying updated on industry trends and best practices.
  • Openness to work with different cultures across CEE.

 

What We Offer

  • Work with an advanced and innovative tech stack. 
  • Collaborate with multinational teams and projects across the CEE region. 
  • Enjoy tremendous opportunities for continuous learning and growth in a fast-paced, agile environment. 
  • Explore versatile career paths in a dynamically growing international organization at local and Group levels.
  • Gain cross-industry experience through our expanding portfolio of businesses (both within and beyond MOL Group). 
  • Be part of a supportive and inclusive workplace that values innovation and creativity. 
  • Enjoy a hybrid work model: two days of home office and three days at the beautiful MOL Campus. 
  • Access Medicover health insurance and an extensive benefits package, including an annual bonus and maximized fringe benefits. 
  • Participate in regular team outings and MOL family events. 

 

How to apply?

If you wish to shape the future with us, please submit your application via our career site: www.mol.hu/karrier

 

Main Recruitment Steps

  • Initial call: You will be contacted by MOL Recruitment via phone for a short chat. 
  • Interview: You can expect two to three interview rounds, where you can meet your direct manager and experts from the team, ask your questions, and present your skills and career path. We always make sure you receive timely feedback after each round. 
  • Technical Assessment: You will either receive a take-home assignment after the first interview or be asked to solve an exercise on the spot at one of the interviews.

 

If you would like to know more about Digital Factory: digitalfactory.info

Meet your Hiring Manager: Zsolt Horváth (https://www.linkedin.com/in/hozso/)

 

At MOL Group, we know that our strength lies in diversity. During the selection process, we provide equal opportunities to all applicants with the appropriate qualifications and work experience, regardless of age, gender, disability, or reduced work capacity, sexual orientation, or ethnicity. At MOL Group, everyone matters.

If you have any specific needs related to your reduced work capacity at any stage of the recruitment process, please inform our recruitment team! We are happy to assist.
