Landmark
Data Architect
Remote Data Architect role open to candidates in the listed countries.
Posted: Apr 16, 2026
Eligible countries: 5
Seniority: Architect, Lead
Department: Engineering
Candidate locations
Argentina, Brazil, Colombia, Guatemala, Mexico
Role overview
Requirements and responsibilities
About the role
As a Cloud Data Architect, you’ll lead the design of a next-generation AWS data platform from the ground up — building the Medallion Lakehouse architecture that will become the unified data foundation for the entire organization. This is a greenfield opportunity for a senior architect who wants genuine ownership: shaping ingestion patterns, governance frameworks, and scalable data models across S3, Redshift, Glue, dbt, and Airflow.
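For candidates who want a concrete picture of the Medallion pattern named above, here is a minimal PySpark sketch of the bronze-to-silver hop on S3. The bucket names, paths, and columns are illustrative assumptions, not details from this posting.

```python
# Hypothetical PySpark sketch of a Medallion bronze -> silver promotion on S3.
# All bucket names, paths, and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze layer: raw, append-only data exactly as ingested.
bronze = spark.read.json("s3://example-lake/bronze/orders/")

# Silver layer: deduplicated, typed, quality-gated records.
silver = (
    bronze
    .dropDuplicates(["order_id"])                        # drop replayed events
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # enforce timestamp type
    .withColumn("order_date", F.to_date("order_ts"))     # derive partition key
    .filter(F.col("order_id").isNotNull())               # basic quality gate
)

silver.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/silver/orders/"
)
```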
What you will do
- Lead design and implementation of enterprise data architecture following Medallion framework;
- Develop data models, pipelines, and schemas for data unification, governance, and analytics;
- Build and optimize AWS-based data infrastructure using core services;
- Implement Lakehouse architecture integrating structured and unstructured data;
- Ensure performance, scalability, and cost efficiency of the data ecosystem;
- Design and oversee data ingestion from multiple on-prem and cloud systems (a minimal orchestration sketch follows this list);
- Define integration strategies using batch, streaming, and event-driven patterns;
- Collaborate with governance teams to enforce data security, access control, and quality;
- Work with engineering and business teams to deliver scalable data solutions;
- Support data platform modernization initiatives and roadmap execution;
- Document architecture, standards, and data flows for diverse audiences.
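As context for the ingestion and orchestration responsibilities above, a hedged Airflow sketch of a daily batch load into the bronze layer. The DAG id, source list, and ingest function are invented for illustration, and the `schedule` argument assumes Airflow 2.4+.

```python
# Hypothetical Airflow DAG sketching a daily batch ingestion into bronze.
# DAG id, task names, and the ingest function are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_source(source: str) -> None:
    """Placeholder: pull one source system into the bronze layer on S3."""
    print(f"ingesting {source}")


with DAG(
    dag_id="daily_bronze_ingestion",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # One independent task per assumed source system.
    for source in ["crm", "erp", "web_events"]:
        PythonOperator(
            task_id=f"ingest_{source}",
            python_callable=ingest_source,
            op_kwargs={"source": source},
        )
```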
Must haves
- 8+ years of experience in data engineering or architecture;
- 5+ years of experience working with AWS data ecosystems;
- Expertise with AWS core services (S3, Glue, Redshift, Lambda, Athena, Kinesis, EMR);
- Experience with data integration tools (dbt, Step Functions, Airflow, AWS DMS, Kafka, Kinesis);
- Strong knowledge of data modeling (Medallion/Lakehouse, dimensional modeling, Data Vault, Kimball);
- Experience with infrastructure as code (Terraform, CloudFormation);
- Proficiency in Python, SQL, and Spark (PySpark) for ETL;
- Strong understanding of data governance, quality, and security (a simple quality-gate sketch follows this list);
- Experience with enterprise-scale architecture and data integration;
- Upper-intermediate English level.
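To make the quality expectation above concrete, a small hypothetical PySpark quality gate; the table, column, and 1% threshold are assumptions for illustration, not requirements from the posting.

```python
# Hypothetical row-level quality gate for a silver table; names and the
# failure threshold are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("silver_quality_gate").getOrCreate()

orders = spark.read.parquet("s3://example-lake/silver/orders/")

total = orders.count()
nulls = orders.filter(F.col("customer_id").isNull()).count()

# Fail the pipeline run if more than 1% of rows are missing a customer_id.
if total and nulls / total > 0.01:
    raise ValueError(f"quality gate failed: {nulls}/{total} null customer_id")
```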
Nice to haves
- Practical experience and certification with Boomi AtomSphere.
Location eligibility
Only candidates located in one of the countries listed above are eligible to apply.
Hiring flow
Applications are saved in WithMira for review and follow-up.
1. Apply with your profile and resume snapshot.
2. A recruiter reviews your fit for this position.
3. Messages and referral status stay attached to this role.