Find Your Next AI Adventure
Discover opportunities at the frontier of artificial intelligence. Browse curated positions from leading AI companies.
Featured Opportunities
Software Engineer - Web Crawling
About Tavily
Tavily is building the search engine for AI agents. We believe the future of work will be led by AI agents, and that requires restructuring how the web is accessed. Our Search API provides web access for AI agents, enabling real-time internet search optimized for LLMs and Retrieval-Augmented Generation (RAG). We are backed by leading investors and serve developers and enterprises worldwide. Our team is fast-moving and ambitious. We ship quickly, iterate constantly, and care deeply about impact.

The Role
As a Software Engineer - Web Crawling at Tavily, you'll build the systems that power how AI interacts with the live web. You'll design and implement large-scale, intelligent web acquisition pipelines, making sense of an ever-changing online ecosystem and transforming it into structured, high-quality data. This role blends technical understanding of browsers, networks, and automation with hands-on engineering. You'll work on adaptive systems that navigate modern websites, understand evolving web interaction and detection systems, and gather meaningful information safely, efficiently, and at scale. Your work will directly shape Tavily's ability to provide AI agents with accurate, real-time knowledge of the world, bridging the gap between the open web and intelligent reasoning.

This position is based in Israel and requires existing legal authorization to work in Israel. Tavily is not able to provide visa sponsorship for this role.

What You'll Do
• Build distributed data acquisition systems that capture and structure the live web
• Investigate and analyze browser internals, fingerprinting, and anti-automation systems to develop stealthy, adaptive orchestration layers
• Prototype and deploy intelligent automation frameworks using Playwright, Puppeteer, and low-level browser control protocols (CDP); a minimal sketch follows this listing
• Conduct hands-on research into network flows, JavaScript-based protections, and emerging web standards affecting automation
• Collaborate with AI and infrastructure teams to integrate real-time web data into retrieval pipelines and LLM-powered agents
• Translate deep technical insights into production-grade components, balancing research freedom with engineering rigor
• Continuously evolve Tavily's capabilities in resilience, speed, and authenticity of web interaction

What We're Looking For
• 3–5 years of experience as a backend or systems engineer, ideally working with large-scale, distributed, or web-facing infrastructure
• Strong programming skills in Python or Node.js; experience with Go or C++ is a strong plus
• Proven experience building and maintaining browser automation systems (Playwright, Puppeteer, or CDP) in production environments
• Solid understanding of browser internals, network protocols, and web interaction mechanisms
• Experience designing high-performance, resilient systems that handle scale, concurrency, and complex orchestration
• Strong debugging and analytical skills, with the ability to investigate edge cases, performance bottlenecks, and behavior under dynamic web conditions
• Comfort working in a fast-moving environment, collaborating closely with product, AI, and infrastructure teams to ship reliable systems quickly
• Excellent documentation and communication skills to ensure smooth integration and operational visibility

Nice to Have
• Familiarity with cloud infrastructure, containerization (Docker), Kubernetes, and CI/CD
• Experience integrating AI or retrieval systems into production pipelines

Perks & Benefits
• A young, open, and inclusive culture where everyone has real impact from day one
• The chance to build alongside a fast-moving team at the forefront of agentic AI
• A deep-work culture that values curiosity, creativity, and continuous learning
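To make the tooling named above concrete, here is a minimal sketch of driving a headless Chromium page through Playwright's Python API while holding a raw CDP session, the combination the automation bullet refers to. The target URL and the single CDP command are illustrative placeholders, not Tavily's actual pipeline.

```python
# A minimal sketch of Playwright-driven page acquisition with a raw CDP session.
# The URL and CDP command are illustrative assumptions, not Tavily's stack.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()

    # Open a low-level Chrome DevTools Protocol session for fine-grained control.
    cdp = page.context.new_cdp_session(page)
    cdp.send("Network.enable")

    page.goto("https://example.com", wait_until="networkidle")
    html = page.content()  # downstream systems would parse and structure this
    print(len(html), "bytes fetched")

    browser.close()
```

A production acquisition system would layer fingerprint management, retry logic, and distributed scheduling on top of this core loop.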
Senior Backend Engineer
IONIX is a pioneer in EASM (external attack surface management), an emerging area of cybersecurity dedicated to helping organizations uncover vulnerabilities in their external attack surface. We're looking for creative innovators who share our passion for protecting the world from the risks inherent in hyperconnected online attack surfaces.

Location: Tel Aviv, Israel

About the Position
IONIX External Exposure Management protects enterprises' external attack surface from cyber risks and increases security-team efficiency by providing tools that shorten the time to discover and prioritize exposures. IONIX reduces the exploitable attack surface by discovering every internet-facing asset, assessing dependencies and connections, and validating exploitable risks to prioritize remediation of critical, impactful exposures. IONIX reduces alert fatigue, streamlines the process for resolving alerts, and ensures that they reach the right team. Global leaders including BlackRock, Infosys, Sompo, The Telegraph, and E.ON depend on IONIX for proactive management of their complex and dynamic attack surface.

Responsibilities
• Take part in developing the world's leading ecosystem-security solution
• Address security, scalability, and performance challenges
• Work closely with other team members and provide technical leadership
• Own features end to end, from design through development and deployment
• Take part in uncovering and resolving critical vulnerabilities for some of the biggest companies out there
Cloud FinOps Engineer
Torq is building the next generation of security automation. As a Cloud FinOps Engineer on our R&D team, you'll own our cloud financial operations and act as the strategic link between engineering, product, and finance. You'll analyze cloud and SaaS spending, identify optimization opportunities (e.g., right-sizing, reserved instances), shape pricing models, implement governance frameworks, and build dashboards and alerts that provide clear visibility into cloud spend. You'll collaborate with the Chief Architect, VP of R&D, and platform teams to ensure cost efficiency while supporting innovation.
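As one concrete illustration of the visibility work described above, the sketch below pulls two weeks of daily AWS spend through the Cost Explorer API (boto3) and flags days above a simple baseline. The 20% threshold and the print-style alert are assumptions for illustration, not Torq's tooling.

```python
# A hedged sketch of cloud-spend visibility: fetch daily AWS costs and flag
# anomalies. The 20% threshold is an assumed, illustrative alerting rule.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce")  # AWS Cost Explorer

end = date.today()
start = end - timedelta(days=14)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
)

costs = [float(d["Total"]["UnblendedCost"]["Amount"]) for d in resp["ResultsByTime"]]
baseline = sum(costs) / max(len(costs), 1)

for day, cost in zip(resp["ResultsByTime"], costs):
    if cost > baseline * 1.2:  # assumed 20% anomaly threshold
        print(f"ALERT {day['TimePeriod']['Start']}: ${cost:,.2f} vs baseline ${baseline:,.2f}")
```

In practice this kind of check would feed a dashboard or paging system rather than stdout, and the baseline would account for weekly seasonality.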
Data Engineer
About Tavily - Search API
Tavily is a cutting-edge company focused on providing a powerful and efficient Search API. We empower developers and businesses by delivering high-quality, relevant search results quickly and reliably. Join our team to help build the data infrastructure that supports and scales our core product.

The Role
We are seeking a highly motivated and experienced Data Engineer to join our growing team. You will be responsible for designing, constructing, installing, testing, and maintaining highly scalable data management systems. You will work closely with our engineering and DevOps teams to build and optimize the data pipelines that are crucial to the performance and accuracy of our Search API.

Responsibilities
• Design, build, and maintain efficient and reliable ETL/ELT pipelines for data warehousing (a minimal sketch follows this listing)
• Develop and improve our data infrastructure
• Ensure data quality, integrity, and security across all data platforms
• Optimize data systems for performance and scalability
• Troubleshoot and resolve issues related to data pipelines and data infrastructure
• Collaborate with cross-functional teams to understand data needs and deliver solutions

Minimum Qualifications
• A degree in Computer Science, Statistics, Engineering, or a related quantitative field
• 3+ years of professional experience as a Data Engineer or in a similar role focused on data infrastructure
• Proficiency in Python
• Solid experience with relational (SQL) and NoSQL databases
• Experience with AWS and its data services
• Proven experience building and optimizing data pipelines and data architectures

Preferred Qualifications
• Experience with MongoDB, Snowflake, Redis, or AWS S3
• Experience with big data technologies
• Experience with Airflow
• Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes)
• Experience working at a company focused on API services or search technology
• Knowledge of data governance and data security best practices
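Since the role centers on ETL/ELT pipelines and lists Airflow as a preferred qualification, here is a minimal sketch of such a pipeline as an Airflow 2.x DAG. The DAG name, task bodies, bucket, and table are hypothetical placeholders rather than Tavily's actual schema.

```python
# A minimal sketch of an extract-transform-load pipeline as an Airflow DAG.
# All names (DAG id, bucket, table) are illustrative assumptions.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    # e.g., pull raw search logs from an assumed S3 bucket
    print("extracting from s3://example-raw-logs/ ...")

def transform(**_):
    # e.g., normalize and deduplicate records before warehousing
    print("transforming ...")

def load(**_):
    # e.g., load cleaned rows into an assumed warehouse table
    print("loading into warehouse.search_events ...")

with DAG(
    dag_id="search_logs_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```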
Software Engineer
About Tavily
We're building the infrastructure layer for agentic web interaction at scale. Our API is designed from the ground up to power Retrieval-Augmented Generation (RAG) and real-time reasoning in AI systems. By connecting LLMs to high-quality, trustworthy web content, we help developers build agents that are not only intelligent but also informed. We work with some of the most innovative teams in AI, from small startups shaping the ecosystem to the largest enterprises deploying AI at scale. Whether it's powering sales assistants, research copilots, or internal knowledge tools, we're the missing link between LLMs and the real world.

The Role
We're looking for a Software Engineer to join our core engineering team and help build the infrastructure that powers real-time AI agents. You'll work across the stack, ship fast, and take ownership of critical systems as we scale. This is a great role for a generalist who loves building from scratch, thrives in low-process environments, and wants to work on technically ambitious problems.

What You'll Do
• Be the expert in building fast, reliable, and scalable systems for real-time LLM workflows
• Design and implement backend infrastructure and API endpoints (a sketch follows this listing)
• Collaborate closely with product to iterate on features quickly and thoughtfully
• Improve performance, monitoring, and reliability across the stack
• Own core systems and contribute to key architectural decisions
• Help shape a strong engineering culture focused on velocity and quality

What You Bring
• 2+ years of professional software engineering experience
• Strong backend development skills (Python, Go, or C++)
• Proven experience designing and operating large-scale, distributed systems, with a solid understanding of API design, reliability, and performance at scale
• Hands-on expertise with AWS infrastructure and cloud-native services, including practical knowledge of deploying and managing services in real-world environments
• Comfort in a fast-paced startup environment with lots of ownership
• Hands-on experience designing and operating high-throughput, low-latency infrastructure, including systems that handle massive concurrency and heavy query loads
• Curiosity about LLMs, retrieval, and the future of AI systems, with a drive to stay at the forefront of new technology

Nice to Have
• Experience with performance optimization, load testing, and debugging production issues in large-scale systems
• Strong attention to system correctness, performance, and reliability, with a drive to continuously refine and improve production systems
• Familiarity with DevOps practices, including CI/CD pipelines, infrastructure as code, Kubernetes orchestration, and modern monitoring tools
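As a hedged illustration of the backend API work described above, the sketch below exposes a retrieval step behind a small HTTP endpoint using FastAPI. The framework choice, the /search route, and the stubbed retrieve function are all assumptions for illustration; the posting does not name Tavily's internal stack.

```python
# A small sketch of an API endpoint fronting a retrieval step for an LLM
# workflow. FastAPI, the route, and the stub are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SearchRequest(BaseModel):
    query: str
    max_results: int = 5

def retrieve(query: str, k: int) -> list[dict]:
    # placeholder for a real retrieval pipeline (index lookup, ranking, etc.)
    return [{"url": "https://example.com", "snippet": f"stub result for {query!r}"}][:k]

@app.post("/search")
def search(req: SearchRequest) -> dict:
    results = retrieve(req.query, req.max_results)
    return {"query": req.query, "results": results}

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```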