Data Engineer - Backend Developer

Washington, District of Columbia


Employer: NuWave Solutions
Industry: Engineering
Salary: Competitive
Job type: Full-Time

Overview

BigBear.ai is seeking a Data Engineer, Backend Developer to support a program in the Washington, DC metro area. This position works on site 5 days per week at an office in the National Capital Region, with some travel required. An active TS/SCI clearance is required.

This is an ideal opportunity to be part of one of the fastest growing AI/ML companies in the industry. At BigBear.ai, we're in this business together. We own it, we make it thrive, and we enjoy the challenges of our work. We know that our employees play the largest role in our continual success. That is why we foster an environment of growth and development, with an emphasis on opportunity, recognition, and work-life balance. We give the same high level of commitment to our employees that we give to our clients. If BigBear.ai sounds like the place where you want to be, we'd enjoy speaking with you.



What you will do

  • Design, develop, and implement end-to-end data pipelines, utilizing ETL processes and technologies such as Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks (for a rough illustration, see the sketch after this list).
  • Create and optimize data pipelines from scratch, ensuring scalability, reliability, and high-performance processing.
  • Perform data cleansing, data integration, and data quality assurance activities to maintain the accuracy and integrity of large datasets.
  • Leverage big data technologies to efficiently process and analyze large datasets, particularly those encountered in federal agency environments.
  • Troubleshoot data-related problems and provide innovative solutions to address complex data challenges.
  • Implement and enforce data governance policies and procedures, ensuring compliance with regulatory requirements and industry best practices.
  • Work closely with cross-functional teams to understand data requirements and design optimal data models and architectures.
  • Collaborate with data scientists, analysts, and stakeholders to provide timely and accurate data insights and support decision-making processes.
  • Maintain documentation for software applications, workflows, and processes.
  • Stay updated with emerging trends and advancements in data engineering and recommend suitable tools and technologies for continuous improvement.
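
For a rough sense of the day-to-day work this list describes, here is a minimal PySpark sketch of an end-to-end ETL pipeline: extract raw data, cleanse it, and write a curated table. It is illustrative only; the paths, column names, and quality rules are assumptions for the example, not details of the actual program.

    # Illustrative only. All paths, column names, and quality rules below are
    # hypothetical; they are not taken from the program this role supports.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw records from a (hypothetical) landing zone.
    raw = spark.read.option("header", True).csv("/data/landing/events.csv")

    # Transform: de-duplicate, enforce a basic quality rule, and normalize types.
    clean = (
        raw.dropDuplicates(["event_id"])
           .filter(F.col("event_ts").isNotNull())
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Load: write a partitioned, query-friendly curated table.
    clean.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/events")

    spark.stop()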


What you need to have

  • Bachelor's Degree and 7+ years of experience (in lieu of a Bachelor's degree, 6 additional years of relevant experience)
  • Clearance: Must hold an active TS/SCI clearance
  • Minimum of 7 years of experience as a Data Engineer, including demonstrated experience building data pipelines from scratch.
  • High level of proficiency in ETL processes and demonstrated, hands-on experience with technologies such as Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks.
  • Strong problem-solving skills and ability to solve complex data-related issues.
  • Demonstrated experience working with large datasets and leveraging big data technologies to process and analyze data efficiently.
  • Understanding of data modeling/visualization, database design principles, and data governance practices.
  • Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.
  • Detail-oriented mindset with a commitment to delivering high-quality results.
  • Must be in the DC metro area and available to work onsite 5 days per week.
  • Recent DoD or IC-related experience.


What we'd like you to have

  • Knowledge of Qlik/Qlik Sense, QVD/QlikView, and Qlik Production Application Standards (QPAS) is a significant plus.
  • Previous experience with Advana is a plus.


About BigBear.ai

BigBear.ai delivers AI-powered analytics and cyber engineering solutions to support mission-critical operations and decision-making in complex, real-world environments. BigBear.ai’s customers, which include the US Intelligence Community, Department of Defense, the US Federal Government, as well as customers in manufacturing, healthcare, commercial space, and other sectors, rely on BigBear.ai’s solutions to see and shape their world through reliable, predictive insights and goal-oriented advice. Headquartered in Columbia, Maryland, BigBear.ai is a global, public company traded on the NYSE under the symbol BBAI. For more information, please visit: http://bigbear.ai/ and follow BigBear.ai on Twitter: @BigBearai.


Created: 2024-08-22
Reference: 3860
Country: United States
State: District of Columbia
City: Washington
ZIP: 20010

