Job Description
Summary
Description
- Form a firm understanding of our instructional content and engineering strategies, goals, and KPIs.
- Work with colleagues and cross-functional partners to ensure the quality and stability of our data pipelines.
- Propose and drive data program initiatives and help plan and forecast scope and timelines for delivery.
- Build data visualizations that help answer key questions about the performance and usage of our native client application and web deliverables.
- Partner with our software engineering and content strategy teams to determine the efficacy, impacts, and reach of our instructional content and software.
- Ensure data and reporting quality and accuracy by root-causing and remediating issues in collection, storage, and reporting.
Minimum Qualifications
- Bachelor’s or master’s degree in Statistics, Mathematics, Data Science, Computer Science, or a related discipline.
- Minimum of five years’ experience as a data scientist or analyst in a business intelligence role, or equivalent practical knowledge.
- Experience working with SQL and building data visualizations using relevant tools (e.g., Splunk, Tableau, or similar).
- Experience sizing and estimating timelines on analytics initiatives of varying scales.
- Experience with A/B testing methodologies and supporting technologies and platforms.
- Experience working cross-functionally with software engineering, legal, and data warehousing teams.
- Fluency in at least one programming language, such as Python, R, Scala, Java, or Swift.
- Excellent communication skills, with the ability to distill complex or divergent results with clarity and precision.
Preferred Qualifications
- Experience with machine learning, prompt engineering, and large language models.
- Experience performing analysis on instructional content-driven products and services.
- An understanding of how customers find and consume digital instructional content.
- Working knowledge of content management and delivery systems.
- Experience with stream processing technologies such as Apache Spark, Apache Storm, or Apache Kafka.