kWh Analytics is looking for engineers and data scientists to join the fight against climate change.
kWh Analytics is making solar affordable for all by attacking solar’s biggest problem: the high cost of capital. We build software that helps solar investors efficiently manage portfolio data and intelligently direct investment to where it can do the most good. By building the solar industry’s largest independent repository of solar asset performance data, we help our clients (like Google) be the smartest investors in the industry. We’re a funded, growth-stage startup in downtown San Francisco, backed by both venture capital and the US Department of Energy.
What we’re looking for:
- An Eye for Quality: Quality code is important to you. Our team is always striving to write code that is more reliable, flexible, and reusable. You should have a good knowledge of programming best practices but always be open to learning new things from your coworkers.
- Ready to Roll: As a young start-up, we need teammates who can contribute from day one and learn quickly. Experience with our software stack is preferred, but a broad range of skills will be important for filling in knowledge gaps.
- Team Player: We’re a growing team, and we work collaboratively on all aspects of our projects. Good teamwork and respect for your peers are essential.
- Generalist: We work with a number of languages and frameworks to achieve our goals. We value general software knowledge and the ability to learn new technologies. Knowledge of current architectural trends and the ability to change course quickly are important.
- Local: We’re looking for a team member who will join us at our office in San Francisco.
- Solar or Finance expertise a plus: We build tools for the solar investment community, so a background in either is nice to have.
Back-End Engineer I/II (Data Engineer)
- Everything kWh does depends on the quality and quantity of solar data in our data vault. You’ll be in charge of parsing operational and financial data from remote APIs and client data file dumps, in a variety of formats, and integrating it into the vault.
- Writing code to output cleaned and summarized data in the form needed for data science, statistics, data visualizations, and front-end website features.
- Optimizing queries for good performance on very large data sets.
- Designing database schemas, or redesigning them to meet changing needs.
- Debugging, automating, maintaining, testing, and documenting the data pipeline.
- Working closely with a product manager on a small team to prioritize features and estimate implementation times.
- Excellent Python skills are a must.
- Fluency with SQL (we use Postgres, but experience with other flavors is fine).
- Fluency in a Linux server command-line environment.
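To make the query-and-summarization side of the role concrete, here is a minimal, self-contained sketch. The table name, column names, and sample figures are invented for illustration, and sqlite3 stands in for Postgres so the snippet runs anywhere:

```python
import sqlite3

# Invented example data: monthly energy readings per solar site.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (site_id TEXT, month TEXT, kwh REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("SF-001", "2023-01", 1200.0),
     ("SF-001", "2023-02", 1350.0),
     ("SF-002", "2023-01", 980.0)],
)

# Summarize per-site production -- the kind of rollup that feeds data
# science, statistics, and front-end features.
rows = conn.execute(
    "SELECT site_id, SUM(kwh) FROM readings GROUP BY site_id ORDER BY site_id"
).fetchall()
print(rows)  # [('SF-001', 2550.0), ('SF-002', 980.0)]
```

In production this would run against Postgres with indexes tuned for very large data sets; the sketch only shows the shape of the work.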
Nice to have:
- Experience using Pandas.
- Experience with Amazon Web Services.
- Experience using ORMs and schema migration scripts (we use SQLAlchemy/Alembic).
- Experience scripting and automating a data pipeline, using Python, shell script, or equivalent.
- Experience parsing and cleaning gnarly data sets, and converting data to/from CSV, JSON, YAML, etc.
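As an illustrative sketch of the parsing-and-cleaning work described above (the file contents, column names, and cleaning rules here are invented, and only the standard library is used):

```python
import csv
import io
import json

# Hypothetical raw export from a monitoring vendor: inconsistent
# whitespace around fields and kWh values with thousands separators.
RAW_CSV = """Site ID , Reading Date ,Energy (kWh)
 SF-001 ,2023-01-05, "1,234.5"
SF-002, 2023-01-05 ,987.0
"""

def clean_rows(raw):
    """Parse a messy CSV export into normalized dicts."""
    # skipinitialspace lets the reader recognize quoted fields that
    # follow a delimiter-plus-space, as in the sample above.
    reader = csv.reader(io.StringIO(raw), skipinitialspace=True)
    headers = [h.strip() for h in next(reader)]
    rows = []
    for values in reader:
        record = dict(zip(headers, (v.strip() for v in values)))
        # Coerce the energy column to a float, dropping thousands separators.
        record["Energy (kWh)"] = float(record["Energy (kWh)"].replace(",", ""))
        rows.append(record)
    return rows

cleaned = clean_rows(RAW_CSV)
print(json.dumps(cleaned, indent=2))
```

Real vendor feeds are messier than this, of course; the same cleaned records could just as easily be emitted as JSON, YAML, or database rows.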
Who we’re looking for:
- Bachelor’s degree in computer science or equivalent real-world experience.
- At least 2 years of professional experience using Python and SQL.
- You should have the curiosity and drive to constantly learn new languages, skills, and tools.
- You should thrive in a fast-paced, dynamic environment as kWhA adapts to a rapidly growing and changing solar industry.
- If you aren’t already familiar with the solar industry, you should be excited to learn about it.
- You should be dedicated to quality and craftsmanship, and to truth and accuracy in reporting.
Please send your resume to firstname.lastname@example.org. If possible, please include a code sample or link to a code sample (e.g. a GitHub repo) that represents work you’re proud of.