GCP Data Architect

Neurons Lab, Roma, 07-12-2025
Summary

Location: Roma
Business Division:
Contract Type:
Publication Date: 07-12-2025

Job Description

Join Neurons Lab as a Senior GCP Data Architect working on banking data lake and reporting systems for large financial institutions.
About The Project
End-to-end role covering presales, architecture and implementation. You start by gathering requirements, designing solutions, and establishing governance frameworks, then progress to implementing your designs through to MVP delivery.
Our Focus
Banking and Financial Services clients with stringent regulatory requirements (Basel III, MAS TRM, PCI‑DSS, GDPR). Architect data lake solutions for AML reporting, KYC data management, and regulatory compliance.
Your Impact

Design end‑to‑end data architectures combining GCP data services (BigQuery, Dataflow, Data Catalog, Dataplex) with on‑premise systems (e.g., Oracle).
Establish data governance frameworks with cataloging, lineage and quality controls.
Implement data pipelines, governance tooling and deliver working MVPs for mission‑critical banking systems.

Duration & Reporting
Part-time, long-term engagement with project-based allocations, reporting directly to the Head of Cloud.
Objective

Architecture Excellence: Design data lake architectures, create technical specifications, lead requirements gathering and solution workshops.
MVP Implementation: Build out your designs – implement data pipelines, deploy governance frameworks, and deliver working MVPs with data quality controls in place.
Data Governance: Establish and implement comprehensive governance frameworks including metadata management, data cataloging, data lineage, and data quality standards.
Client Success: Own the full lifecycle from requirements to MVP delivery, ensuring secure, compliant, scalable solutions aligned with banking regulations and GCP best practices.
Knowledge Transfer: Create reusable architectural patterns, data governance blueprints, implementation code and documentation.

KPI

Produce data architecture documentation and a governance framework.
Deliver MVP from architecture to working implementation.
Establish data governance implementations including metadata catalogs, lineage tracking, and quality monitoring.
Achieve 80%+ client acceptance rate on proposed data architectures and technical specifications.
Implement data pipelines with data quality checks and comprehensive monitoring.
Create reusable architectural patterns and IaC modules for banking data lakes and regulatory reporting systems.
Document solutions aligned with banking regulations (Basel III, MAS TRM, AML/KYC requirements).
Deliver cost models and ROI calculations for data lake implementations.

Areas of Responsibility
Phase 1: Data Architecture & Presales

Elicit and document requirements for data lake, reporting systems, analytics platforms.
Design end‑to‑end data architectures: ingestion patterns, storage strategies, processing pipelines, consumption layers.
Create architecture diagrams, data models (dimensional, data vault), technical specifications, and implementation roadmaps.
Data Governance Design: metadata management frameworks, cataloging strategies, lineage implementations, quality monitoring.
Evaluate technology options and recommend optimal GCP & On‑Premises data services for specific banking use cases.
Calculate ROI, TCO, and cost-benefit analysis for data lake implementations (a toy worked example follows this list).
Banking Domain: design solutions for AML reporting, KYC data management, regulatory compliance, risk reporting.
Hybrid Cloud Architecture: integration patterns between GCP and on‑premise platforms (Oracle, SQL Server).
Security & compliance architecture: IAM, VPC Service Controls, encryption, data residency, audit logging.
Participate in presales activities: technical presentations, client workshops, demos, proposal support.
Create detailed implementation roadmaps and technical specifications for development teams.
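
To make the ROI/TCO item above concrete, here is a toy three-year TCO and ROI comparison in Python. All figures are invented placeholders; a real model would use client-specific licence, infrastructure, and run-rate costs.

    # Illustrative only: every figure below is an invented placeholder.
    # Simple three-year TCO / ROI comparison for a data lake implementation.

    YEARS = 3

    on_prem_annual = 900_000   # assumed current on-premise annual run cost (EUR)
    migration_cost = 600_000   # assumed one-off cloud build/migration cost (EUR)
    cloud_annual = 450_000     # assumed annual cloud run cost (EUR)

    tco_on_prem = on_prem_annual * YEARS
    tco_cloud = migration_cost + cloud_annual * YEARS

    savings = tco_on_prem - tco_cloud
    roi = savings / tco_cloud

    print(f"3-year TCO on-premise: EUR {tco_on_prem:,}")
    print(f"3-year TCO cloud:      EUR {tco_cloud:,}")
    print(f"Savings: EUR {savings:,}  |  ROI: {roi:.0%}")

With these placeholder numbers the comparison yields EUR 2,700,000 versus EUR 1,950,000 over three years, roughly 38% ROI on the cloud spend; the structure of the model, not the figures, is the point.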

Phase 2: MVP Implementation & Delivery

Build production data pipelines based on approved architectures.
Implement data warehouses: schema creation, partitioning, clustering, optimization, security setup.
Deploy data governance frameworks: Data Catalog configuration, metadata tagging, lineage tracking, quality monitoring.
Develop data ingestion patterns from on‑premise systems.
Write production‑grade data transformations, validation, and business logic.
Develop Python applications for data processing automation, quality checks, orchestration.
Build data quality frameworks with validation rules, anomaly detection, and alerting (see the sketch after this list).
Create sample dashboards and reports for business stakeholders.
Implement CI/CD pipelines for data pipeline deployment using Terraform.
Deploy monitoring, logging, alerting for data pipelines and workloads.
Performance tuning and cost optimization for production data workloads.
Document implementation details, operational runbooks, and knowledge transfer materials.
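
For a concrete flavour of the data quality work described above, here is a minimal sketch in Python. It is illustrative only: the column names, thresholds, and baseline statistics are assumptions, not details from this posting.

    # Minimal data quality check, illustrative only.
    # Column names, thresholds and baseline statistics are assumptions.
    import pandas as pd


    def run_quality_checks(df: pd.DataFrame) -> list:
        """Return a list of human-readable quality violations for a batch."""
        issues = []

        # Rule 1: mandatory identifiers must not be null.
        if df["customer_id"].isna().any():
            issues.append("customer_id contains null values")

        # Rule 2: transaction amounts must be non-negative.
        if (df["amount"] < 0).any():
            issues.append("amount contains negative values")

        # Rule 3: crude anomaly flag - a batch whose mean amount deviates
        # strongly from an assumed baseline would be routed to alerting.
        baseline_mean, baseline_std = 120.0, 35.0  # assumed baseline statistics
        batch_mean = df["amount"].mean()
        if abs(batch_mean - baseline_mean) > 3 * baseline_std:
            issues.append(f"batch mean amount {batch_mean:.2f} is anomalous")

        return issues


    if __name__ == "__main__":
        sample = pd.DataFrame(
            {"customer_id": ["C1", "C2", None], "amount": [100.0, -5.0, 250.0]}
        )
        for issue in run_quality_checks(sample):
            print("DQ violation:", issue)  # in practice, raise an alert instead

In a real pipeline, checks like these would typically run inside Dataflow or Composer tasks (or a dedicated tool such as Great Expectations or Soda) and feed the alerting and monitoring items listed above.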

Skills & Knowledge
Certifications & Core Platform

GCP Professional Cloud Architect (preferred, not mandatory).
GCP Professional Data Engineer.
Core GCP data services: BigQuery, Dataflow, Pub/Sub, Data Catalog, Dataplex, Dataform, Composer, Cloud Storage, Data Fusion.

Must‑Have Technical Skills

Data Architecture (expert level) – data lakes, lakehouses, data warehouses, modern data architectures.
Data Governance (expert level) – metadata management, cataloging, lineage, data quality frameworks, hands‑on implementation.
SQL (advanced-expert) – production-grade queries, transformations, window functions, CTEs, optimization, tuning (illustrated in the sketch after this list).
Data Modeling (expert level) – dimensional modeling, data vault, ER, schema design patterns for banking.
ETL/ELT Implementation – production data pipelines using Dataflow (Beam), Dataform, Composer, orchestration.
Python – production data applications, pandas/numpy, automation, scripting, testing.
Data Quality – validation frameworks, monitoring, anomaly detection, automated testing.
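
As an illustration of the SQL and Python depth these items imply, here is a small sketch that runs a CTE-plus-window-function query against BigQuery from Python. The project, dataset, table, and column names are invented for illustration.

    # Illustrative sketch: project, dataset, table and column names are assumptions.
    # Runs a CTE + window-function query against BigQuery from Python.
    from google.cloud import bigquery

    SQL = """
    WITH daily_totals AS (
        SELECT customer_id, DATE(txn_ts) AS txn_date, SUM(amount) AS daily_amount
        FROM `my-project.banking_lake.transactions`
        GROUP BY customer_id, txn_date
    )
    SELECT
        customer_id,
        txn_date,
        daily_amount,
        AVG(daily_amount) OVER (
            PARTITION BY customer_id
            ORDER BY txn_date
            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
        ) AS rolling_7d_avg
    FROM daily_totals
    ORDER BY customer_id, txn_date
    """

    def main() -> None:
        client = bigquery.Client()          # uses application-default credentials
        rows = client.query(SQL).result()   # blocks until the query completes
        for row in rows:
            print(row.customer_id, row.txn_date, row.rolling_7d_avg)

    if __name__ == "__main__":
        main()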

BFSI Domain Knowledge (MANDATORY)

Banking data domains: AML, KYC, regulatory reporting, risk management.
Financial regulations: Basel III, MAS TRM, PCI‑DSS, GDPR.
Understanding of banking data flows, reporting requirements, compliance frameworks.
Experience with banking data models and financial services data architecture.

Strong Plus

On‑premise data platforms: Oracle, SQL Server, Teradata.
Data quality tools: Great Expectations, Soda, dbt tests, custom validation frameworks.
Visualization tools: Looker, Looker Studio, Tableau, Power BI.
Infrastructure as Code: Terraform for GCP data services.
Streaming data processing: Pub/Sub, Dataflow streaming, Kafka integration.
Vector databases and search: Vertex AI Vector Search, Elasticsearch (GenAI use cases).

Communication

Advanced English (written & verbal).
Client‑facing presentations, workshops, requirement gathering sessions.
Technical documentation and architecture artifacts (diagrams, specifications, data models).
Stakeholder management and cross‑functional collaboration.

Experience

7+ years in data architecture, data engineering or solution architecture roles.
4+ years hands‑on with GCP data services (BigQuery, Dataflow, Data Catalog, Dataplex).
3+ years in data governance (mandatory).
3+ years in BFSI/Banking domain (mandatory).
5+ years with SQL and relational databases – optimization, tuning.
3+ years in data modeling – dimensional modeling, data vault, other methodologies.
2+ years in presales/architecture roles – requirements, solution design, client presentations.
Experience with on‑premise data platforms (mandatory) – Teradata, Oracle, SQL Server integration with cloud.

