Data Engineer
Syncron is a leading SaaS company with over 20 years of experience, specializing in aftermarket solutions. Our Service Lifecycle Management Platform offers domain-fit solutions for:
- Supply Chain optimization,
- Pricing strategy,
- Service Fulfillment (e.g. warranty management, field service management, service parts management, knowledge management).
Our company has a global presence, with offices in the US, UK, Germany, France, Italy, Japan, Poland, and India, and group headquarters in Sweden.
We build upon the belief that our greatest strength is our People, and our unique company culture is something our Employees genuinely appreciate.
With this, we are winning the hearts and minds of world-leading organizations such as JCB, Kubota, Electrolux, Toyota, Renault and Hitachi.
About the role
- Collaborate with stakeholders to gather requirements and contribute to technical discussions related to data processing, ETL workflows, and pipeline reliability.
- Design, build, and maintain data pipelines using Apache Spark (PySpark), focusing on performance and scalability.
- Develop Python-based services and scripts that ingest, transform, and load data to/from S3 and other data sources.
- Use AWS services like Lambda, S3, and Step Functions to build modular and cost-effective data workflows.
- Maintain and automate CI/CD pipelines using GitHub Actions and deploy workloads to Kubernetes environments.
- Participate in production support, incident resolution, and debugging performance bottlenecks.
- Write clean, testable code and maintain good documentation for team visibility and handoffs.
- Contribute to design documents and team discussions, especially around optimization and cost awareness.
What are we looking for?
- Experience: 5–7 years in total, with 3–4 years focused on Python-based data engineering.
Tech Skills:
- Strong programming skills in Python (Pandas, PySpark, boto3).
- Experience with Spark (PySpark) and AWS services like S3, Lambda.
- Good understanding of SQL and NoSQL storage options.
- Exposure to Kubernetes and deployment via Helm or CDK is a plus.
- Familiar with Git, GitHub Actions or similar CI/CD tools.
Mindset:
- Willing to learn and work hands-on with cloud-native tools.
- Able to work independently and within a small team.
- Strong problem-solving and debugging mindset.
- Good communication skills and documentation habits.
Nice to have (not mandatory):
- Familiarity with event-driven architectures (Kafka, SQS, or EventBridge).
- Exposure to Istio, Knative, or other service mesh tools.
- Understanding of data mesh principles or multi-tenant SaaS architectures.
- Basic knowledge of observability practices (CloudWatch, Prometheus, OpenTelemetry).
Unsure if you meet all the job requirements but passionate about the role? Apply anyway! Syncron values diversity and welcomes all Candidates, even those with non-traditional backgrounds. We believe in transferable skills and a shared passion for success!
- Department: Products
- Location: Bengaluru
- Remote status: Hybrid
- Employment type: Full-time
Respect. Flexibility. Growth.
At Syncron, we’re not just shaping the future of service lifecycle management - we’re also cultivating a dynamic and innovative community of thinkers, doers and visionaries passionate about making a difference.
Here, your voice is heard, and your potential has no limits.

The world is changing. Manufacturing companies are shifting from selling products to delivering services. And we are driving this transformation together with our Customers, by helping them reduce costs and manual processes. We are guiding them on their journey towards a fully connected service experience and making their brand stronger.
Our goal: to make the complex simple.
Visit syncron.com to get to know us better!