Department: SOC - Excellence
Responsibilities:
- Data Solution Design and Development: Architect, implement, and optimize data solutions using Elasticsearch/OpenSearch, integrating with various data sources and systems.
- Machine Learning Integration: Apply your expertise in machine learning to develop models, algorithms, and pipelines for data analysis, prediction, and anomaly detection within Elasticsearch/OpenSearch environments.
- Data Ingestion and Transformation: Design and implement data ingestion pipelines to collect, cleanse, and transform data from diverse sources, ensuring data quality and integrity.
- Elasticsearch/OpenSearch Administration: Manage and administer Elasticsearch/OpenSearch clusters, including configuration, performance tuning, index optimization, and monitoring.
- Query Optimization: Optimize complex queries and search operations in Elasticsearch/OpenSearch to ensure efficient and accurate retrieval of data.
- Troubleshooting and Performance Tuning: Identify and resolve issues related to Elasticsearch/OpenSearch performance, scalability, and reliability, working closely with DevOps and Infrastructure teams.
- Collaboration and Communication: Collaborate with cross-functional teams, including data scientists, software engineers, and business stakeholders, to understand requirements and deliver effective data solutions.
- Documentation and Best Practices: Document technical designs, processes, and best practices related to Elasticsearch/OpenSearch and machine learning integration. Provide guidance and mentorship to junior team members.
Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Strong experience in designing, implementing, and managing large-scale Elasticsearch/OpenSearch clusters, including experience with indexing, search queries, performance tuning, and troubleshooting.
- Expertise in machine learning techniques and frameworks, such as TensorFlow, PyTorch, or scikit-learn, with hands-on experience in developing ML models and integrating them into data pipelines.
- Proficiency in programming languages like Python, Java, or Scala, and experience with data processing frameworks (e.g., Apache Spark) and distributed computing.
- Solid understanding of data engineering concepts, including data modeling, ETL processes, data warehousing, and data integration.
- Experience with cloud platforms like AWS, Azure, or GCP, and knowledge of containerization technologies (e.g., Docker, Kubernetes) is highly desirable.
- Strong analytical and problem-solving skills, with the ability to work effectively in a fast-paced, collaborative environment.
- Excellent communication skills, with the ability to translate complex technical concepts into clear and concise explanations for both technical and non-technical stakeholders.
- Proven track record of successfully delivering data engineering projects on time and within budget.