Data & SQL Developer (Data Quality, Reporting, and Requirements)
Location: Juno Beach, FL
Contract: 1 year+
Rate: $52-$56+/hr Depending on Experience
We’re hiring a hands-on SQL-focused developer/analyst who enjoys building reliable data solutions—not just pulling reports. You’ll write and optimize SQL (PostgreSQL preferred), build repeatable data validation and auditing checks, and deliver clean, trusted datasets that support reporting and lightweight visualization. You’ll partner directly with internal users to clarify ambiguous requests, define “source of truth,” and translate business needs into data logic that holds up under scrutiny.
This role is ideal for someone who is coder-friendly (comfortable writing scripts/queries, debugging, and improving processes), detail-oriented, and naturally asks “why?” before implementing.
Key Responsibilities
- Develop and maintain SQL queries, views, and datasets to support reporting and operational use cases.
- Write and optimize SQL in PostgreSQL (or similar), including multi-table joins, aggregations, window functions, and validation queries.
- Build repeatable data quality checks (duplicates, missing keys, invalid values, stale data) and document root cause + remediation.
- Support database updates and controlled data fixes with traceability (change logs, validation steps, rollback awareness).
- Create reporting outputs and lightweight visualizations using tools like Excel and/or Power BI (or similar), ensuring results are explainable and verifiable.
- Translate vague stakeholder requests into clear definitions, requirements, and acceptance criteria before building.
- Maintain documentation: data definitions, mappings, refresh logic, audit results, and “how to trust this data” notes.
- Contribute to process improvements around data governance, permissions, and consistent metric definitions.
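The repeatable data quality checks described above (duplicates, orphan keys) can be sketched as plain SQL wrapped in a small script. This is a hypothetical illustration only: the table and column names (`assets`, `readings`, `asset_id`, `reading_id`, `recorded_at`) are invented for the sketch and use an in-memory SQLite database rather than PostgreSQL.

```python
# Hypothetical sketch of repeatable data quality checks.
# Table/column names are invented; uses in-memory SQLite for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE assets (asset_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE readings (reading_id INTEGER, asset_id INTEGER, recorded_at TEXT);
    INSERT INTO assets VALUES (1, 'Pump A'), (2, 'Pump B');
    INSERT INTO readings VALUES
        (10, 1, '2024-01-01'),
        (10, 1, '2024-01-01'),  -- duplicate reading_id
        (11, 99, '2024-01-02'); -- orphan asset_id (no matching asset)
""")

# Check 1: duplicate keys on the reading table.
dup_count = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT reading_id FROM readings
        GROUP BY reading_id
        HAVING COUNT(*) > 1)
""").fetchone()[0]

# Check 2: orphan foreign keys (readings pointing at no asset).
orphan_count = conn.execute("""
    SELECT COUNT(*)
    FROM readings r
    LEFT JOIN assets a ON a.asset_id = r.asset_id
    WHERE a.asset_id IS NULL
""").fetchone()[0]

print(dup_count, orphan_count)  # each check flags exactly one bad record here
```

In practice each check would log its exceptions to an audit table with a run timestamp, so findings and remediation are traceable over time.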
Required Qualifications
- Strong SQL experience (PostgreSQL preferred; ability to ramp quickly if coming from SQL Server/Oracle/MySQL).
- Strong understanding of relational fundamentals: keys, constraints, cardinality, and data integrity.
- Demonstrated experience debugging data issues (e.g., join row explosions, mismatched totals, missing records) and proving correctness with validation checks.
- Experience building repeatable data audits/reconciliations and documenting findings and fixes.
- Comfortable working directly with stakeholders to clarify ambiguity and define success criteria (requirements mindset).
- Working proficiency with Excel for data validation and reporting (pivots, lookups; Power Query is a plus).
- Clear communication: able to explain logic and assumptions to technical and non-technical audiences.
Nice-to-Have Qualifications
- Experience writing scripts for data work (Python preferred; other languages acceptable) to automate checks or repeatable reporting.
- Familiarity with data visualization / BI tools (Power BI, Tableau, Grafana, etc.).
- AWS familiarity (RDS Postgres, S3, Athena/Glue, IAM basics) or general cloud data patterns.
- Engineering/asset-heavy domain familiarity (mechanical/industrial context helpful).
- Familiarity with time-series data concepts and tools (AVEVA PI / PI Vision).
- Familiarity with asset management systems (IBM Maximo).
- Experience using database tools (e.g., Databricks, MongoDB, DBeaver) and query performance tuning basics.
What We Value (Working Style)
- Naturally curious and detail-driven; validates results instead of “hoping it’s right.”
- Willing to ask hard questions and challenge unclear requirements respectfully.
- Evidence-based problem solving (reconciliation, repeatable checks, clear documentation).
- Bias toward maintainable solutions over one-off fixes.
Example Work You Might Do
- Build a SQL dataset that merges multiple sources while preserving “one row per asset” correctness.
- Create a recurring audit that flags duplicates, orphan keys, or stale records and logs exceptions.
- Partner with a stakeholder to define a metric, document assumptions, and deliver a dataset + simple visualization they can trust.
- Create dynamic dashboards that combine live equipment data from multiple sources to support engineers.
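The “one row per asset” merge described above can be verified mechanically after any join. A minimal sketch (in-memory SQLite; the `assets` and `attributes` tables are invented for illustration): aggregate the many-side before joining, then assert the merged row count equals the distinct asset count.

```python
# Hypothetical sketch: preserving one-row-per-asset grain when merging sources.
# Table names are invented; uses in-memory SQLite for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE assets (asset_id INTEGER PRIMARY KEY);
    CREATE TABLE attributes (asset_id INTEGER, attr TEXT);
    INSERT INTO assets VALUES (1), (2), (3);
    INSERT INTO attributes VALUES (1, 'x'), (1, 'y'), (2, 'z');
""")

# A naive LEFT JOIN would "explode" asset 1 into two rows; aggregating
# the many-side first restores one-row-per-asset grain.
merged = conn.execute("""
    SELECT a.asset_id, attrs.n_attrs
    FROM assets a
    LEFT JOIN (SELECT asset_id, COUNT(*) AS n_attrs
               FROM attributes
               GROUP BY asset_id) attrs
      ON attrs.asset_id = a.asset_id
""").fetchall()

# Grain check: merged row count must equal the number of assets.
n_assets = conn.execute("SELECT COUNT(*) FROM assets").fetchone()[0]
assert len(merged) == n_assets
```

This kind of post-join assertion is what “proving correctness with validation checks” looks like in practice: the check fails loudly when a new duplicate on the many-side would otherwise silently inflate totals.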