Data Engineer
Join a team that is innovating in Customer Experience (CX) to grow Amazon's building network. We are looking for a Data Engineer to play a pivotal role in developing and implementing a comprehensive data platform for Global Engineering Services (GES).
We are seeking an experienced Data Engineer to design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack (Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark, Athena, etc.).
The ideal candidate will have a proven track record of building integrated data tools and platforms to support global businesses. This fast-paced, results-oriented role will involve developing the platform to support key decisions for Global Engineering Services.
You will work closely with Engineering Product and Technical Program teams as you develop forward-looking architecture and build strategies to complement the revolutionary GEIST vision.
In this role, you will have the freedom and encouragement to experiment, improve, invent, and innovate on behalf of our customers.
* Design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack (Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark, Athena, etc.).
* Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting (a minimal sketch of such an ETL follows this list).
* Create and support real-time data pipelines built on AWS technologies including EMR, Glue, Redshift/Spectrum, and Athena.
* Continually research the latest big data technologies to provide new capabilities and increase efficiency.
* Manage numerous requests concurrently and strategically prioritize when necessary.
* Partner and collaborate across teams and roles to deliver results.
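For illustration only, below is a minimal sketch of the kind of batch ETL described above, written as a plain PySpark job that could run on EMR. The bucket paths, dataset, and column names are hypothetical placeholders, not the team's actual data model, and a production pipeline would be wired into the Glue Data Catalog / Lake Formation setup named in this posting.

# Minimal PySpark ETL sketch: raw JSON events -> curated, partitioned Parquet
# in an S3 data lake, queryable via Athena or Redshift Spectrum once the
# partitions are registered in the Glue Data Catalog. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("ges-orders-etl")  # hypothetical job name
    .getOrCreate()
)

# Extract: read raw JSON events landed by an upstream system (assumed layout).
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: roll events up into a unified daily model per site.
unified = (
    raw
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "site_id")
    .agg(
        F.countDistinct("order_id").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Load: write partitioned Parquet to the curated zone of the data lake.
(
    unified.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-lake-bucket/curated/orders_daily/")
)

spark.stop()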
About the team:
The GEIST team serves as the software solution team for all Global Engineering needs, working with its customers to set a consolidated tech strategy and prioritize investments that best serve their needs and our bottom line.
BASIC QUALIFICATIONS:
* Experience in data engineering.
* Experience with data modeling, warehousing, and building ETL pipelines.
* Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala).
* Experience with one or more scripting languages (e.g., Python, KornShell).
* Knowledge of cloud services such as AWS or equivalent.
PREFERRED QUALIFICATIONS:
* Experience with big data technologies such as Hadoop, Hive, Spark, EMR.
* Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, DataStage, etc.
Amazon is an equal opportunity employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your experience and skills.