Data Engineer
Chicago, Illinois
Employer: Rush Hospital
Salary: Competitive
Job type: Full-Time
Location: Chicago, IL
Hospital: RUSH University Medical Center
Department: ORA Administration
Work Type: Full Time (Total FTE between 0.9 and 1.0)
Shift: Shift 1
Work Schedule: 8 Hr (8:00:00 AM - 5:00:00 PM)
Summary:
The Data Engineer is responsible for designing and implementing data pipelines for cloud projects. This position requires working with complex data sources and transforming them into something useful for analysts. Exemplifies the Rush mission, vision, and values and acts in accordance with Rush policies and procedures.
Other information:
• Bachelor's Degree.
• 5 years of experience.
• Strong background in cloud computing, software engineering and data processing.
• Data management experience.
• Experience in ETL tools such as Pentaho, Talend, Informatica, Azure Data Factory, Apache Kafka, and Apache Camel.
• Experience designing and implementing analysis solutions on Hadoop-based platforms such as Cloudera Hadoop or Hortonworks Data Platform, or on Spark-based platforms such as Databricks.
• Proficient in RDBMS such as Oracle, SQL Server, DB2, and MySQL.
• Strong analytical and problem-solving skills.
• Strong verbal and written communication skills.
• Proficient programming skills in Python, SQL, NoSQL, and Spark.
• Ability to manage multiple projects.
• Ability to work independently or in groups.
• Ability to prioritize time.
• Ability to adapt to a rapidly changing environment.
Preferred Job Qualifications:
Experience, education, licensure(s), specialized certification(s).
Cloud Certifications in AWS, Azure or GCP.
Physical Demands:
Competencies:
Disclaimer: The above is intended to describe the general content of and requirements for the performance of this job. It is not to be construed as an exhaustive statement of duties, responsibilities or requirements.
Responsibilities:
• Designs, develops, and maintains data pipelines to enable data analysis and reporting.
• Builds, evolves, and scales out infrastructure to ingest, process, and extract meaning out of data.
• Writes complex SQL queries and Python code to support analytics needs.
• Manages projects and processes, working independently with limited supervision.
• Works with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and data warehouses.
• Combines, optimizes, and manages multiple big data sources.
• Builds data infrastructure and determines proper data formats to ensure data is ready for use.
Rush is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other legally protected characteristics.
Created: 2024-06-04
Reference: 9581
Country: United States
State: Illinois
City: Chicago
ZIP: 60018
Similar jobs:
- Principal Data Engineer, HCP Data Platform (AbbVie, Mettawa, Illinois)
- Python Data Engineer (Strategic Staffing Solutions, Chicago, Illinois)
- Data Center Engineer (CBRE, Chicago, Illinois)
- Sr. Data Engineer (Apex Systems, Chicago, Illinois)
- Data Engineer (Verinon Technology Solutions, Arlington Heights, Illinois)
- Data Protection Platform Engineering Manager (Peapod Digital Labs, Chicago, Illinois)
- Sr. Data Engineer (Apex Systems, Rolling Meadows, Illinois)
- Senior Manager - Data Engineering & Machine Learning (United Airlines, Inc., Chicago, Illinois)
- Sr. Data Science Engineer (Remote) (AbbVie, Mettawa, Illinois)
- Senior Data Engineer - Information Security Strategy and Analytics (Remote) (AbbVie, Chicago, Illinois)
- Data Engineering Lead (Director) (Peapod Digital Labs, Chicago, Illinois)
- Data Science Software Engineer (Medline Industries, Northfield, Illinois)
- Lead Engineer - Data & Analytics (AbbVie, North Chicago, Illinois)
- Data Center Engineer Consultant (Parallel Partners, Aurora, Illinois)
- Lead Azure Data Engineer (Insight Global, Chicago, Illinois)
- Data Engineer (Compunnel, Chicago, Illinois)
- Voice and Data Network Engineer (Leidos Holding, Scott Air Force Base, Illinois; $55,250.00 per year)
- Lead Data Engineer (Sogeti, Chicago, Illinois)
- Sr Data Integration Engineer (James Hardie Building Products Inc, Chicago, Illinois)
- Azure Data Engineer (Compunnel, Chicago, Illinois)