DATA ENGINEER
GrowByData was founded by powerhouses in big data analytics and SaaS, who have leveraged the power of global operations for decades. We help early-to-growth-stage companies use data to improve margins, delight customers, and accelerate revenue growth.
Details / requirements:
Are you interested in having your products used by thousands of brands globally? Does building a global brand excite you? Does having your solutions used by the world’s top brands get you out of bed? GrowByData is on a mission to expand its distribution to the world’s retailers and is seeking energetic, creative, and self-driven individuals to join the challenge. We are looking for a motivated and talented individual to join us as a Data Engineer to deliver our enterprise platform! If the challenge excites you, please apply with your CV and 3 reasons why it excites you.
TITLE: DATA ENGINEER
REQUIREMENTS
- Excellent problem solver with very good attention to detail
- Expertise in at least one popular Python web framework, such as Django, Flask, or Falcon
- Experience with building and deploying RESTful APIs.
- Experience with handling large datasets using in-memory data processing libraries like Pandas and NumPy.
- Experience building multi-threaded applications.
- Experience with relational databases as well as NoSQL technologies like MongoDB, Cassandra, DynamoDB, etc.
- Knowledge of object-relational mapping (ORM)
- Experience with information retrieval, data extraction, and analytics-based solutions is preferred.
- Experience with Unix/Linux systems and cloud-based infrastructure and platform services
- Experience deploying software in mixed language and mixed platform environments.
- Ability to express ideas clearly within the team and across other groups.
- Some experience or knowledge of building/maintaining data pipelines will be a great plus.
- Some experience or knowledge of Big data technologies like Spark, Hive, Pig, MapReduce will be a great plus.
- Some experience managing infrastructure in the cloud.
- Some knowledge of machine learning and related technologies is a great plus.
- Prior knowledge of Amazon Redshift is preferred
RESPONSIBILITIES
- Design, build, and maintain efficient, reusable, and reliable code for developing next generation data infrastructure.
- Develop efficient data pipelines for a large-scale data extraction and processing framework.
- Ensure the best possible performance, quality, and responsiveness of applications.
- Identify bottlenecks and bugs, and devise solutions to address them.
- Help maintain code quality, organization, and automation.
- Maintain a strong commitment to testing every aspect of the code.
- Provide support in all phases of the SDLC and ensure delivery of high-quality products.
- Work closely with our data team to integrate your amazing innovations and algorithms into our production systems.
- Support the client-support team when needed with technical or feature aspects.
- Collaborate with the data science research team on creating and evolving data formats that remain flexible as the technology scales.
- Maintain strict confidentiality of your work.
- Communicate proactively and effectively with team members, leads, management, and clients where necessary.
- Assist in creating a friendly, entrepreneurial, A+ corporate culture in Nepal.
EDUCATION
- Minimum Bachelor’s degree in Engineering (Computer Science) or equivalent
EXPERIENCE
- 2+ years of prior working experience in the above-mentioned domain is preferred.
Email inquiries@growbydata.com to be considered immediately.
Overview
Category: Database Programming, Engineering - Computer
Openings: 1
Position Type: Full Time
Experience: 2+ years
Education: B.E. in Computer Science or Bachelor's in Computer Science
Posted Date: 02 Jun, 2021
Apply Before: 01 Jul, 2021
City: Lalitpur