Data Engineer

Atlanta, Georgia


Employer: AEG
Salary: Competitive
Job type: Full-Time

To be considered for this role, you must click "Apply Now" above and, once redirected, fully complete the application process on the follow-up screen.

Who are we:
A professional basketball team and state-of-the-art arena/entertainment venue that specializes in creating memorable experiences for every guest we interact with. Some of our favorite things are live sports, concerts, comedy shows, family shows, and almost any other world-class event you can think of, and we're looking for someone who shares the same interests. We live for the fast-paced world of sports & live entertainment, and as such, we work hard, run fast, execute flawlessly, and party it up when it all comes together. Lastly, we strive to deliver wonderful experiences that create lasting memories, and we prefer to surround ourselves with those who are the best at what they do.

Who are you:
An enthusiastic lover of sports, live entertainment, and people. You have a true passion for engaging in meaningful interactions and creating memorable experiences for all guests. You strive to be helpful, engaging, and knowledgeable about all things Atlanta Hawks and State Farm Arena. You enjoy being part of an exciting and dynamic group, and you're committed to continuously enhancing the productivity and effectiveness of your team. Lastly, you enjoy working hard and celebrating hard, and you'd be shocked if guests weren't positively impacted by their interactions with you.

Job Summary:

The Data Engineer builds and maintains data pipelines using modern coding principles and cloud technologies, covering data transformation, data quality, metadata management, reporting, and analytics. The role ensures that ETL pipelines follow an established multi-layered architecture, and requires proficiency in SQL and in Python or another object-oriented programming language.
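
For illustration only, the sketch below shows the general shape of the multi-layered batch ETL work this summary describes, using a bronze/silver/gold layering in PySpark. It is a minimal sketch, not an actual Hawks or State Farm Arena pipeline; all paths, table names, and columns are hypothetical, and the "delta" format assumes a Databricks / Delta Lake environment.

# Minimal sketch of a layered (bronze -> silver -> gold) batch pipeline in PySpark.
# Paths, columns, and table names are illustrative placeholders only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("layered-etl-sketch").getOrCreate()

# Bronze: ingest raw ticketing events as-is from a landing zone.
bronze = spark.read.json("abfss://landing@examplestorage.dfs.core.windows.net/ticket_sales/")
bronze.write.mode("append").format("delta").save("/lake/bronze/ticket_sales")

# Silver: clean and conform -- deduplicate, enforce types, apply basic quality filters.
silver = (
    spark.read.format("delta").load("/lake/bronze/ticket_sales")
    .dropDuplicates(["order_id"])
    .withColumn("sale_ts", F.to_timestamp("sale_ts"))
    .filter(F.col("ticket_price") >= 0)
)
silver.write.mode("overwrite").format("delta").save("/lake/silver/ticket_sales")

# Gold: aggregate into an analytics-ready summary for reporting.
gold = (
    silver.groupBy("event_id", F.to_date("sale_ts").alias("sale_date"))
    .agg(F.count("order_id").alias("tickets_sold"),
         F.sum("ticket_price").alias("gross_revenue"))
)
gold.write.mode("overwrite").format("delta").save("/lake/gold/daily_ticket_summary")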

What you will do: (Responsibilities)
  • Design, develop, and maintain robust data pipelines to ingest, process, and transform large volumes of data from various sources.
  • Implement data processing frameworks and tools to support batch and real-time data processing needs.
  • Build and optimize data warehouse and data lake solutions for efficient storage and retrieval of structured and unstructured data.
  • Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
  • Perform data profiling, cleansing, and validation to ensure data quality and integrity (a simple validation sketch follows this list).
  • Monitor and troubleshoot data pipelines to identify and resolve performance bottlenecks and issues.
  • Implement data security and privacy measures to protect sensitive information and ensure compliance with regulations (e.g., GDPR, HIPAA).
  • Stay up-to-date with emerging technologies and best practices in data engineering, and make recommendations for continuous improvement.
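
As a hedged example of the data profiling and validation work listed above, the following sketch runs a few simple quality checks before a pipeline run is allowed to continue. The table path, column names, and checks are hypothetical and not taken from this posting.

# Illustrative data-quality checks against a (hypothetical) silver ticket_sales table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()
df = spark.read.format("delta").load("/lake/silver/ticket_sales")

checks = {
    "null_order_ids": df.filter(F.col("order_id").isNull()).count(),
    "duplicate_order_ids": df.count() - df.dropDuplicates(["order_id"]).count(),
    "negative_prices": df.filter(F.col("ticket_price") < 0).count(),
}

failed = {name: count for name, count in checks.items() if count > 0}
if failed:
    # Surface failures so the orchestrator (e.g., Azure Data Factory) can alert or halt the run.
    raise ValueError(f"Data quality checks failed: {failed}")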


What we need from you: (Required Skills/Abilities)

Technical Skills and Experience

Required:
  • Experience with relational and dimensional model development
  • Strong experience with both real-time and batch-based data pipelines
  • Experience with Databricks and knowledge of Apache Spark
  • Experience with Azure Data Factory for orchestration and data transformation
  • Experience with Microsoft Azure cloud computing services
  • High proficiency in programming languages such as Python, Java, or Scala
  • Strong SQL skills and experience working with relational and NoSQL databases
  • Experience with CI/CD practices
  • Experience with GitHub or other version control solutions

Important:
  • Strong experience with any major cloud platform (Azure, GCP, AWS)

Desirable, but not required:
  • Experience in technology consulting
  • Experience working in dynamic companies using modern data engineering practices
  • Familiarity with data-science-based statistical modelling approaches
  • Experience with Apache Airflow, Kubernetes, Terraform
  • Hands-on experience with big data technologies such as Hadoop, Spark, and Kafka
  • Knowledge of graph database development (RDF, OWL)


Nontechnical Skills

Required:
  • Able to ramp up quickly; takes initiative and is a self-starter
  • Strong communication skills
  • Strong interpersonal skills
  • Can zoom from big picture to detail
  • Relevant domain experience

Important:
  • Creative thinking, problem solving, and decision making
  • Experience in the position's region
  • General IT knowledge
  • Can interact with subject matter experts (SMEs) from various departments
  • Open to alternative ways of thinking

Desirable but not required:
  • Can work and collaborate effectively remotely or in person
  • Experience with organizations of a similar profile (size and complexity)
  • Proficiency in a second language


Education and Experience:
  • Bachelor's degree in Computer Science or equivalent work experience
  • Master's degree in Computer Science or MIS is a plus


Physical Requirements:
  • Prolonged periods sitting at a desk and working on a computer


We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, age, disability, gender identity, marital or veteran status, or any other protected class.

If this opportunity looks exciting to you, please complete the application process. Go Hawks!

Created: 2024-06-20
Reference: 2084148
Country: United States
State: Georgia
City: Atlanta
ZIP: 30334


