Must Have Skills
LoadRunner, JMeter, or other performance testing tools
Performance monitoring tools such as New Relic, Prometheus, Grafana, or Datadog
Linux
Docker and Kubernetes
GCP and/or AWS
Scripting, such as with Bash, Perl, or Python
Diagnosing bottlenecks in PostgreSQL queries or other relational database queries
Good To Have Skills
Java Web applications that use RabbitMQ
Apache Benchmark (ab)
Cypress
Jenkins
DevWeb (JavaScript)
Able to work independently with clients
Strong sense of ownership
Excellent communication skills
Key Responsibilities
Deploy and configure test instances and PostgreSQL databases on-premises on Linux and in the cloud on AWS and GCP, leveraging Docker, Kubernetes, and Jenkins and working heavily at the command line.
Troubleshoot deployment issues, test issues, and product issues.
Develop performance tests using tools such as LoadRunner, DevWeb (JavaScript), Apache Benchmark (ab), and Cypress, as well as custom tools and scripts written in Python, Perl, Bash, and SQL.
Monitor, analyze, and diagnose issues and bottlenecks using all available sources of insight, including but not limited to: test results, Jenkins logs, application logs, PostgreSQL logs, Docker, Kubernetes, NetData, Prometheus, Grafana, New Relic, Datadog, AWS or GCP cloud monitoring, RabbitMQ, and custom PostgreSQL queries.
File and track product issues in Jira with sufficient detail for developers to reproduce and fix them.
Work closely with engineering teams to investigate and mitigate problems found in testing.
Present key findings, actionable insights, and guidance to technical and non-technical audiences through clear, meaningful presentations and other communications.
Collaborate with a diverse set of distributed stakeholders, building trust, credibility, and positive relationships.
Write and maintain clear documentation in Confluence.
Verify performance improvements and fixes.
Ensure performance does not regress.
Improve automation so that more time can be spent analyzing measurements rather than just obtaining them.
Take ownership of tasks, demonstrate initiative, communicate proactively, learn from mistakes, and continuously improve.
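The custom scripting mentioned in the responsibilities above can be as simple as a short latency probe. As an illustrative sketch only (in Python using just the standard library; the target URL and request count are placeholders, not part of the role's actual tooling):

```python
import time
import statistics
from urllib.request import urlopen

def measure_latency(url, requests=10):
    """Issue sequential GET requests and return latency stats in milliseconds."""
    samples = []
    for _ in range(requests):
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()  # drain the body so the full response time is measured
        samples.append((time.perf_counter() - start) * 1000)
    ordered = sorted(samples)
    return {
        "min": ordered[0],
        "median": statistics.median(ordered),
        "p95": ordered[int(0.95 * (len(ordered) - 1))],  # nearest-rank p95
        "max": ordered[-1],
    }
```

In practice a tool like LoadRunner, DevWeb, or ab would drive concurrent load; a script like this is typical of the quick, disposable probes used to sanity-check an environment before a full test run.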
Key Requirements
Enjoy performance engineering, are good at it, and have 8+ years of dedicated experience doing it, preferably with complex, containerized, distributed products in the cloud.
Strong experience with:
LoadRunner or other performance testing tools
Linux
Docker and Kubernetes
Understanding and knowledge of GCP, AWS, or another cloud platform
Simple scripting, such as with Bash, Perl, or Python
Diagnosing bottlenecks in PostgreSQL queries or other relational database queries
Performance monitoring tools such as AppDynamics, New Relic, Prometheus, Grafana, or Datadog
Are comfortable with ambiguity and venturing into the unknown, and accept that failure is part of the job, because we’re always pushing our products to discover their limits.
Have strong analytical skills and the ability to question data and find deep meaning in it.
Prefer ownership and growth over handholding and stagnation.
Are passionate about learning and growing, and committed to delivering high-quality work.
Are able to persevere through challenges and to unblock yourself through research, experimentation, and critical thinking.
Value simplification, reusability, knowledge sharing, and standardization over ad hoc heroics.