
Data Analyst


Job Description

Anika Systems is an outcome-driven technology solutions firm that guides federal agencies in solving complex business challenges and preparing for the future. Our services span AI Strategy, Data Intelligence, AI & Machine Learning, Intelligent Automation, and Enterprise Platforms & Engineering, with a specialized focus on National Security and Federal Financial programs. We are dedicated to delivering forward-thinking solutions that accelerate the critical missions of our government clients. This position is 100% remote.

Position Summary
We are seeking a highly collaborative and experienced Data Analyst to support the Office of the Chief Data Officer (OCDO) and the Office of Performance Quality (OPQ) on a federal government contract. In this role, you will design and maintain robust data pipelines, perform in-depth analysis of large-scale datasets, and deliver actionable insights that drive mission decisions. You will work within a Databricks environment, leveraging SQL, PySpark, and Python to transform raw agency data into reliable, governed, and analytics-ready assets. The ideal candidate combines strong engineering fundamentals with analytical acumen and is comfortable operating within complex federal data environments.

Candidates must be U.S. Citizens with the ability to obtain and maintain a government suitability clearance.

Key Responsibilities
Data Engineering & Pipeline Development
  • Design, build, and maintain scalable ETL/ELT data pipelines using PySpark and Python within Databricks environments.
  • Develop and optimize SQL queries and data models to support analytical and reporting workloads.
  • Automate data ingestion workflows from disparate agency sources including APIs, flat files, relational databases, and streaming feeds.
  • Monitor pipeline health, resolve data quality issues, and implement alerting and logging to ensure reliability of data products.
  • Collaborate with data architects to design and enforce data schemas, partitioning strategies, and performance optimization practices.
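To give a concrete flavor of the pipeline work described above, here is a minimal, hedged sketch of an extract-transform-load step in plain standard-library Python. The actual role uses PySpark on Databricks; the record fields and function names here are hypothetical illustrations, not part of any real agency schema.

```python
from datetime import datetime

def extract(raw_rows):
    """Simulate ingestion from a flat file: each row arrives as a dict of strings."""
    return list(raw_rows)

def transform(rows):
    """Type and clean records; drop rows that fail basic quality checks."""
    clean, rejected = [], 0
    for row in rows:
        try:
            clean.append({
                "id": int(row["id"]),
                "amount": float(row["amount"]),
                "date": datetime.strptime(row["date"], "%Y-%m-%d").date(),
            })
        except (KeyError, ValueError):
            rejected += 1  # in a real pipeline this count would feed alerting/logging
    return clean, rejected

def load(rows, target):
    """Append cleaned records to an in-memory stand-in for a target table."""
    target.extend(rows)
    return len(target)

raw = [
    {"id": "1", "amount": "19.99", "date": "2024-01-15"},
    {"id": "2", "amount": "oops", "date": "2024-01-16"},   # fails the quality check
    {"id": "3", "amount": "5.00", "date": "2024-01-17"},
]
table = []
clean, rejected = transform(extract(raw))
load(clean, table)
```

In a Databricks environment the same shape appears as PySpark DataFrame transformations with the reject count wired into job monitoring.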
Data Analysis & Reporting
  • Conduct exploratory data analysis to identify trends, anomalies, and opportunities for improvement.
  • Develop self-service analytics dashboards and reports using Databricks SQL, Tableau, or Power BI.
  • Write complex, performant SQL queries against large datasets to answer ad hoc analytical requests from program managers and leadership.
  • Translate business questions into clearly scoped analytical tasks and deliver findings as data visualizations, written summaries, or briefings.
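The CTE-plus-window-function queries this kind of ad hoc analysis relies on can be sketched as follows. An in-memory SQLite database stands in for a Databricks SQL endpoint here, and the `requests` table and its values are invented for illustration.

```python
import sqlite3

# In-memory SQLite stands in for a Databricks SQL warehouse; the query
# pattern (CTE + window function) is what matters.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE requests (program TEXT, month TEXT, cnt INTEGER);
    INSERT INTO requests VALUES
        ('A', '2024-01', 10), ('A', '2024-02', 14),
        ('B', '2024-01', 7),  ('B', '2024-02', 5);
""")

# CTE plus LAG(): month-over-month change in request volume per program.
rows = conn.execute("""
    WITH monthly AS (
        SELECT program, month, cnt FROM requests
    )
    SELECT program, month,
           cnt - LAG(cnt) OVER (
               PARTITION BY program ORDER BY month
           ) AS delta
    FROM monthly
    ORDER BY program, month
""").fetchall()
```

The first month in each partition has no prior row, so `LAG` yields NULL there; later rows carry the month-over-month delta.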
Collaboration & Stakeholder Support
  • Work closely with data scientists, program analysts, IT engineers, and agency stakeholders to understand data needs and deliver tailored solutions.
  • Document pipelines, data models, and analytical notebooks to support knowledge transfer, peer review, and audit readiness.
  • Participate in Agile sprint ceremonies, contribute to backlog grooming, and deliver iterative data products aligned with program priorities.
Required Qualifications
  • Bachelor's degree in Computer Science, Information Systems, Data Science, Engineering, Mathematics, or a related technical field.
  • 3+ years of experience in data engineering, data analytics, or a closely related discipline.
  • Demonstrated experience on federal government programs or supporting a federal agency data environment.
  • Strong proficiency in SQL, including complex joins, window functions, CTEs, and query performance tuning against large datasets.
  • Hands-on experience with PySpark for distributed data processing, transformations, and optimization techniques.
  • Proficiency in Python for scripting, data manipulation, and automation.
  • Direct experience working within Databricks, including notebooks, jobs, clusters, and Unity Catalog.
  • Familiarity with data lakehouse concepts including Delta Lake, bronze/silver/gold architecture, and medallion design patterns.
  • Experience with version control systems (Git/GitHub/GitLab) and collaborative development workflows.
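The bronze/silver/gold (medallion) layering named in the qualifications above can be sketched, very loosely, as successive refinement passes. In Databricks these layers would be Delta tables; plain Python lists and dicts stand in here, and the agency names and values are hypothetical.

```python
# Bronze: raw data exactly as landed, including bad records.
bronze = [
    {"agency": "OCDO", "value": "3"},
    {"agency": "OPQ", "value": "5"},
    {"agency": "OCDO", "value": "bad"},   # kept in bronze, filtered downstream
]

# Silver: cleaned and typed; quality failures drop out at this layer.
silver = []
for rec in bronze:
    try:
        silver.append({"agency": rec["agency"], "value": int(rec["value"])})
    except ValueError:
        pass

# Gold: aggregated, reporting-ready totals per agency.
gold = {}
for rec in silver:
    gold[rec["agency"]] = gold.get(rec["agency"], 0) + rec["value"]
```

The design choice is that each layer is reproducible from the one before it, so bronze always preserves the raw record for audit and reprocessing.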
Preferred Qualifications
  • Databricks Certified Associate Developer for Apache Spark or Databricks Certified Data Engineer Associate/Professional.
  • Experience with cloud platforms such as AWS GovCloud, Microsoft Azure Government, or Google Cloud for Government.
  • Familiarity with CI/CD practices for data pipelines, including automated testing and deployment using tools like Azure DevOps or GitHub Actions.
  • Working knowledge of data visualization platforms (Tableau, Power BI) and experience connecting them to Databricks SQL endpoints.
  • Familiarity with Unity Catalog for data access control, lineage, and governance within Databricks.
