Senior Software Engineer - Data Platform
GrowByData was founded by veterans of big data analytics and SaaS who have run global operations for decades. We help early-to-growth-stage companies use data to improve margins, delight customers, and accelerate revenue growth.
Details / requirements:
GrowByData is looking for a Senior Software Engineer - Data Platform who combines deep expertise in cloud infrastructure, data engineering, and backend development. This role requires a versatile professional who can architect and implement end-to-end cloud solutions, build robust data pipelines and processing workflows, and develop high-performance APIs. You'll work with our AWS and GCP infrastructure, manage dynamic workflow orchestration, optimize our data processing systems, and create backend services that power our reporting and analytics capabilities.
Responsibilities:
- Design and implement cloud infrastructure using AWS (EC2, ECS, Lambda, S3, Redshift, IAM, VPC) and GCP (GCS, BigQuery)
- Build and maintain data pipelines using Airflow dynamic DAGs, transforming and normalizing various data structures into database-ready formats
- Develop and maintain high-performance REST APIs using FastAPI for data access and reporting functionalities
- Write optimized SQL queries for complex reporting requirements and integrate them into API endpoints
- Manage programmatic EC2 spot instance orchestration for cost-effective job execution and workflow automation through Jenkins
- Architect data flows from ingestion through S3 to Redshift, and cross-cloud replication to GCS/BigQuery
- Deploy, monitor, and optimize containerized applications and microservices (Docker, ECS)
- Identify and resolve infrastructure bottlenecks and performance issues across the entire data processing pipeline and backend services
- Mentor junior engineers on cloud architecture, data engineering, backend development, and workflow orchestration best practices
- Collaborate with product architects, stakeholders, and client-support teams to deliver scalable solutions on time
- Maintain strict confidentiality and help build an A+ corporate culture
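To make one of the responsibilities above concrete — transforming and normalizing varied data structures into database-ready formats — here is a minimal sketch in plain Python. The function name and the sample payload are illustrative, not part of GrowByData's actual schema:

```python
from typing import Any


def flatten_record(record: dict[str, Any], parent_key: str = "", sep: str = "_") -> dict[str, Any]:
    """Flatten a nested dict into a single-level, column-per-key row
    suitable for loading into a warehouse table (e.g. Redshift or BigQuery)."""
    row: dict[str, Any] = {}
    for key, value in record.items():
        col = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested objects, prefixing child keys with the parent column name
            row.update(flatten_record(value, col, sep))
        else:
            row[col] = value
    return row


# Hypothetical raw payload from an upstream source
raw = {"sku": "A-1", "price": {"amount": 9.99, "currency": "USD"}}
print(flatten_record(raw))
# {'sku': 'A-1', 'price_amount': 9.99, 'price_currency': 'USD'}
```

In a production pipeline this kind of normalization step would typically run inside an Airflow task, with Pandas handling tabular batches rather than single records.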
Requirements:
- Strong proficiency in Python as the primary development language for data transformation (Pandas), infrastructure automation, and backend development (FastAPI)
- Extensive experience developing REST APIs with FastAPI and writing complex, optimized SQL queries for reporting
- Strong SQL expertise in query optimization, data modeling, and working with Redshift and BigQuery
- Extensive experience with AWS compute and storage services (EC2, ECS, Lambda, S3, Redshift) and GCP (GCS, BigQuery)
- Strong understanding of cloud networking (VPC, subnets, security groups) and IAM/security best practices
- Strong hands-on experience with Apache Airflow, particularly dynamic DAG creation and workflow orchestration
- Proven experience with CI/CD and automated job scheduling
- Experience programmatically managing cloud resources (spinning up/down instances, cost optimization)
- Strong knowledge of data transformation, normalization, and multi-cloud data warehouse architecture
- Proficiency with Docker, containerization, and Infrastructure as Code (Terraform/CloudFormation)
- Experience with NoSQL databases (MongoDB, Elasticsearch)
- Strong understanding of distributed computing and cloud architecture best practices
- Proficiency with Git, Linux ecosystem, and Agile development processes
- Strong analytical and problem-solving skills with a practical approach in fast-paced environments
- Bash scripting and Makefile experience (nice to have)
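The requirement to write optimized SQL for reporting and integrate it into API endpoints can be sketched with the standard-library `sqlite3` module standing in for the warehouse. The table, index, and function names are hypothetical; in this role the same pattern would run against Redshift or BigQuery behind a FastAPI endpoint:

```python
import sqlite3

# In-memory stand-in for a reporting warehouse; the schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, merchant TEXT, total REAL);
    CREATE INDEX idx_orders_merchant ON orders (merchant);  -- supports the GROUP BY below
""")
conn.executemany(
    "INSERT INTO orders (merchant, total) VALUES (?, ?)",
    [("acme", 10.0), ("acme", 5.0), ("globex", 7.5)],
)


def revenue_by_merchant(conn: sqlite3.Connection, min_total: float = 0.0) -> list[tuple]:
    """A typical reporting aggregate, parameterized the way it would be
    when wired into an API endpoint (never string-interpolated)."""
    cur = conn.execute(
        """
        SELECT merchant, SUM(total) AS revenue
        FROM orders
        WHERE total >= ?
        GROUP BY merchant
        ORDER BY revenue DESC
        """,
        (min_total,),
    )
    return cur.fetchall()


print(revenue_by_merchant(conn))
# [('acme', 15.0), ('globex', 7.5)]
```

The design point is that the query stays parameterized and index-backed, so the same function serves both ad hoc reporting and a high-traffic endpoint.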
Nice-to-Have:
LLMs & AI-Enabled Systems:
- Experience integrating LLM APIs (e.g., OpenAI, Anthropic, Hugging Face) into backend services
- Familiarity with prompt engineering, embeddings, and token/cost optimization strategies
- Experience building LLM-powered features, such as:
  - Natural-language-to-SQL or analytics querying
  - Data summarization, classification, or enrichment
  - Semantic search or retrieval-augmented generation (RAG)
- Experience managing vector data pipelines and working with vector stores (e.g., OpenSearch, Pinecone, Weaviate, FAISS)
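The semantic-search idea behind the vector-store items above reduces to nearest-neighbor lookup by cosine similarity. A toy sketch using only the standard library (the 3-dimensional "embeddings" and document names are made up; a real system would get vectors from an embedding model and query OpenSearch, Pinecone, Weaviate, or FAISS instead of a dict):

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Toy in-memory "vector store": document id -> embedding
index = {
    "doc_pricing": [0.9, 0.1, 0.0],
    "doc_shipping": [0.1, 0.8, 0.3],
}


def semantic_search(query_vec: list[float], index: dict, top_k: int = 1) -> list[str]:
    """Return the top_k document ids most similar to the query vector."""
    scored = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]


print(semantic_search([0.85, 0.2, 0.0], index))
# ['doc_pricing']
```

In a RAG pipeline the retrieved documents would then be passed to an LLM as context; dedicated vector stores replace the linear scan here with approximate nearest-neighbor indexes.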
Qualifications & Experience:
A Bachelor of Engineering degree is required; four or more years of relevant experience is highly advantageous.
Location: GrowByData (Kathmandu)
The right candidate will be looking for not just a new job, but a stellar career with our growing company. If you want to leave your mark on a new endeavor and really take ownership of what will be the driving force of a successful company, then let us know you want to become a part of our team! Email inquiries@growbydata.com to be considered immediately.
Overview
| Category | Engineering - Software |
| Openings | 1 |
| Position Type | Full Time |
| Experience | 4+ years |
| Education | Bachelor's degree in a related field |
| Posted Date | 07 Jan, 2026 |
| Apply Before | 05 Feb, 2026 |
| City | Lalitpur |