JEN - 8NF305
Our client is a leading enterprise technology company, founded in San Francisco and well funded by Bain & Venrock, dedicated to helping organizations protect and secure their sensitive data. Their product safeguards the consumer data that businesses manage and store every day: it prevents unauthorized access, stops data leakage, and provides protection across the organization's cloud systems. Using machine learning, their product allows businesses to discover, classify, and protect their data.
- Write clean, high-quality, high-performance, maintainable code
- Develop and support software, including applications, database integrations, interfaces, and new functionality enhancements
- Build highly available and secure authentication and API services
- Coordinate cross-functionally to ensure projects meet business objectives and compliance standards
- Support the testing and deployment of new products and features
- Participate in code reviews
- Have built or worked on production applications
- Enjoy understanding our users and what would make their day to day processes easier to manage
- Love shipping features that are immediately used by our customers
- Can decompose complex business problems and lead a team in solving them
- Seek to iterate on new products based on customer feedback
- 4+ years of experience
Tools We Use:
Brownie points if you have the following experience:
- Expertise in one or more systems or high-level programming languages (e.g., Python, Go, Java, C++) and the eagerness to learn more
- Experience running scalable (thousands of requests per second) and reliable (three nines, i.e., 99.9% availability) systems
- Experience developing complex software systems that scale to substantial data volumes or millions of users, with production-quality deployment, monitoring, and reliability
- Experience with large-scale distributed storage and database systems, SQL or NoSQL (e.g., MySQL, Cassandra)
- Data processing: experience building and maintaining large-scale and/or real-time complex data processing pipelines using Kafka, Hadoop, Hive, Storm, or ZooKeeper