Join kWh Analytics in the fight against climate change.

About Us:

    kWh Analytics is building the industry’s largest independent data repository of solar asset performance. Our software helps clients like Google efficiently manage their solar portfolios and intelligently direct investment to where it can do the most good. We are a growth-stage startup backed by both venture capital and the US Department of Energy. We are seeking mission-driven team players who can join us at our office in downtown San Francisco.

Open positions:  

Data Engineer (Back-End)

RESPONSIBILITIES:

Everything kWh does depends on the quality and quantity of solar data in our vault. As a member of our four-person Data Vault team, you would be leading this charge by…

  • Parsing operational and financial data from remote APIs and client data file dumps in a variety of formats, and integrating it into the vault
  • Writing code to output cleaned and summarized data in the form needed for data science, statistics, data visualizations, and front-end website features
  • Optimizing queries for good performance on very large data sets
  • Designing database schemas, or redesigning them to meet changing needs
  • Debugging, automating, maintaining, testing, and documenting the data pipeline
  • Working closely with a product manager on a small team to prioritize features and estimate implementation times

WHO WE’RE LOOKING FOR:

  • You should have a bachelor’s degree in computer science or equivalent real-world experience
  • You should have the curiosity and drive to constantly learn new languages, skills, and tools
  • You should thrive in a fast-paced, dynamic environment as kWh adapts to a rapidly growing and changing solar industry
  • If you aren’t already familiar with the solar industry, you should be excited to learn about it
  • You should be dedicated to quality and craftsmanship, and to truth and accuracy in reporting

REQUIRED SKILLS:

  • Excellent Python skills are a must
  • Fluency with SQL (we use Postgres, but other SQL dialects are fine)
  • Fluency in a Linux server command-line environment

NICE TO HAVE:

  • Experience scripting and automating a data pipeline using Python, shell scripts, or equivalent
  • Experience parsing and cleaning gnarly data sets, and converting data to/from CSV, JSON, YAML, etc.
  • Experience with Amazon Web Services
  • Experience using Pandas

Please send your resume to people@kwhanalytics.com. If possible, include a code sample or a link to one (e.g., a GitHub repo) that represents work you’re proud of.