
Groq

Groq develops custom AI accelerators, including LPUs, to optimise AI workloads, providing cloud and on-premise solutions.

Categories

Technology  
Groq
Leadership team

Jonathan Ross  (CEO & Founder)

Claire Hart  (Chief Legal Officer)

Allison Hopkins  (Chief Talent Officer)

Sunny Madra  (COO, President, GTM, Operations & Supply Chain)

Mohsen Moazami  (President of International)

Ian Andrews  (Chief Revenue Officer)

Chelsey Susin Kantor  (Chief Marketing Officer)

Industries

Technology

Products / Services
Language Processing Unit (LPU), GroqCloud™ platform, GroqRack™ Cluster, Developer Tools, AI Inference Services
Number of Employees
100 - 500
Headquarters
400 Castro St, Suite 600, Mountain View, CA 94041, USA
Established
2016
Company Type
Private company
Revenue
$5M – $20M
Summary

Groq, Inc. is an American artificial intelligence (AI) company founded in 2016 by Jonathan Ross and a team of former Google engineers. Based in Mountain View, California, the company specialises in AI accelerators, particularly through its application-specific integrated circuit (ASIC) known as the Language Processing Unit (LPU). The LPU is designed to accelerate AI workloads, including running large language models (LLMs), image classification, anomaly detection, and predictive analysis. The company’s hardware is optimised for performance, energy efficiency, and execution speed, making it particularly effective for AI inference tasks.
 

Groq raised seed funding in 2017, led by Chamath Palihapitiya of Social Capital, and later secured significant investments, including a $300 million Series C round in April 2021 co-led by Tiger Global Management and D1 Capital Partners. That round lifted Groq’s valuation above $1 billion, making it a unicorn. In 2024, the company raised $640 million in a Series D round, bringing its valuation to $2.8 billion.
 

The company has also been expanding its product offerings. In 2022, Groq acquired Maxeler Technologies, strengthening its capabilities in dataflow systems. In 2024, it launched GroqCloud, a developer platform designed to allow developers to access Groq's chips and APIs for cloud-based AI work. Groq also provides on-premise solutions through its GroqRack compute clusters for enterprises looking for localised AI infrastructure.
 

The LPU is the heart of Groq’s technology, designed to maximise performance for AI applications by improving memory and computation efficiency. Its deterministic architecture ensures precise control over hardware, enhancing overall system performance. Groq’s products are built for scalability and affordability, with a focus on reducing energy consumption. The company continues to expand globally and is a significant player in the AI hardware space, providing tools that support next-generation AI models.

History

Groq was founded in 2016 by Jonathan Ross and a team of former Google engineers. Headquartered in Mountain View, California, the company develops application-specific integrated circuits (ASICs) designed to accelerate AI inference workloads. These chips, known as Language Processing Units (LPUs), are tailored for tasks such as running large language models (LLMs), image classification, anomaly detection, and predictive analysis.
 

Before founding Groq, Jonathan Ross initiated the development of Google's Tensor Processing Unit (TPU) as a 20% project, which later became a significant part of Google's AI infrastructure. This experience laid the foundation for Groq's focus on specialised hardware for AI applications.
 

In 2017, Groq secured $10 million in seed funding led by Chamath Palihapitiya of Social Capital. The company continued to attract investment, raising $300 million in a Series C funding round in April 2021, co-led by Tiger Global Management and D1 Capital Partners. This round brought Groq's total funding to approximately $367 million and valued the company at over $1 billion, marking its entry into "unicorn" status.
 

In March 2022, Groq acquired Maxeler Technologies, a company known for its dataflow systems technologies. This acquisition enhanced Groq's capabilities in high-performance computing and machine learning, further strengthening its position in the AI hardware market.


A significant development occurred in August 2023 when Groq selected Samsung Electronics' foundry in Taylor, Texas, to manufacture its next-generation chips using Samsung's 4-nanometer process node. This collaboration marked the first order at Samsung's new chip factory and was a crucial step in advancing Groq's hardware capabilities.
 

In March 2024, Groq launched GroqCloud, a developer platform that provides access to Groq's LPUs via an API, enabling developers to run AI workloads in the cloud. This move aimed to democratise access to high-performance AI inference capabilities.
 

Later that year, in August 2024, Groq raised an additional $640 million in a Series D funding round led by BlackRock Private Equity Partners, valuing the company at $2.8 billion. This funding was intended to expand Groq's infrastructure and support the growing demand for its AI inference solutions.
 

In February 2025, Groq secured a $1.5 billion commitment from the Kingdom of Saudi Arabia to expand its AI infrastructure in the region. This partnership included a deal with Aramco Digital to build a critical AI hub, highlighting Groq's international expansion efforts.
 

As of 2025, Groq continues to be a prominent player in the AI hardware sector, focusing on delivering high-performance, energy-efficient solutions for AI inference tasks. The company's LPUs and GroqCloud platform are utilised by developers and enterprises seeking to accelerate their AI applications.

Mission

Groq’s mission is to deliver high-performance, affordable, and energy-efficient AI solutions by designing specialised hardware for AI inference. The company aims to optimise AI workloads, such as large language models, image classification, and predictive analysis, using its custom-built Language Processing Units (LPUs). Groq is committed to making AI accessible to all, offering scalable solutions through its cloud platform, GroqCloud, and on-premise compute clusters. The company strives to enable developers and enterprises to deploy AI applications faster and more efficiently, supporting innovation across various industries by providing reliable, low-cost infrastructure for advanced AI models.

Vision

Groq’s vision is to become a global leader in AI infrastructure by creating groundbreaking technology that drives the future of artificial intelligence. The company aims to provide fast, efficient, and scalable solutions for AI inference, helping developers and enterprises unlock the full potential of their AI models. Groq envisions transforming how AI is deployed, offering both cloud and on-premise options to meet diverse business needs. By continuously advancing its technology, Groq strives to ensure that high-performance AI is affordable and accessible to all, supporting the growth of AI-driven innovation and productivity worldwide.

Recognition and Awards

Groq has achieved significant recognition in the AI industry for its innovative AI hardware and solutions. The company’s custom-designed Language Processing Units (LPUs) have set new benchmarks for performance, energy efficiency, and scalability in AI inference. In 2021, Groq raised $300 million in Series C funding, propelling it to "unicorn" status with a valuation of over $1 billion. In 2024, Groq secured an additional $640 million in Series D funding, bringing its valuation to $2.8 billion. The company’s contributions to AI infrastructure have positioned it as a key player in accelerating AI adoption and enabling high-performance workloads.

Products and Services

Groq, Inc. offers a range of advanced products and services designed to accelerate artificial intelligence (AI) workloads, focusing primarily on high-performance, energy-efficient AI inference. The company’s core products are based around its custom-built hardware, which includes the Language Processing Unit (LPU) and its cloud and on-premise platforms for AI deployment.
 

The Language Processing Unit (LPU) is Groq’s flagship product. It is a specialised application-specific integrated circuit (ASIC) designed to handle AI inference tasks, such as processing large language models (LLMs), image classification, anomaly detection, and predictive analysis. Unlike general-purpose processors, the LPU is optimised for the computational demands of AI applications, providing much higher performance, energy efficiency, and scalability. 

 

The first generation, LPU v1, was developed on a 14nm process node and achieved a computational density of more than one tera-operation per second (1 TOPS) per square millimetre of silicon. The second generation, LPU v2, will be manufactured on Samsung’s 4nm process node, allowing for even better performance and power efficiency. The LPU is designed to be deterministic, meaning that it can execute tasks with precise control over hardware components, ensuring optimal performance.
 

Groq’s GroqCloud™ platform is another key product offering. It provides an easy-to-use cloud infrastructure for running AI workloads at scale. Through GroqCloud, developers can access the company’s LPUs via an API and execute AI models in the cloud. This platform allows businesses to quickly deploy and scale AI applications without having to manage their own physical hardware.
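As a sketch of what API access to a cloud inference platform can look like, the snippet below assembles a chat-completion request using only the Python standard library. The endpoint URL and model name are illustrative assumptions (GroqCloud exposes an OpenAI-compatible API, but consult the official documentation for current details), and no network call is made here.

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against current Groq docs.
API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble (but do not send) a chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# "llama-3.1-8b-instant" is a placeholder model name for illustration.
req = build_request("YOUR_API_KEY", "llama-3.1-8b-instant", "Hello!")
# Sending the request with urllib.request.urlopen(req) would return a
# JSON body whose "choices" list contains the model's reply.
```

In practice a developer would typically use an official SDK rather than raw HTTP, but the request shape above is the essential contract between client and platform.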

 

GroqCloud is designed to offer fast, reliable, and affordable AI inference, supporting everything from small-scale trials to large enterprise deployments. The platform is ideal for businesses that need on-demand access to high-performance AI hardware without the upfront costs of purchasing and maintaining physical servers.
 

In addition to the cloud platform, Groq also offers the GroqRack™ Cluster, which is an on-premise solution for businesses that need to deploy AI in their own data centres or AI compute centres. This product allows companies to take advantage of Groq’s high-performance LPUs in their own environment, providing them with full control over their AI infrastructure.

 

The GroqRack Cluster is particularly suited to large enterprises that have specific security or compliance requirements and need to keep their data and workloads within their own facilities. It offers the same performance and energy efficiency as GroqCloud but with the flexibility and control of on-premise deployment.
 

Groq’s developer tools are designed to make it easy for developers to integrate the company’s hardware and software into their AI projects. The company provides comprehensive documentation, APIs, and SDKs to ensure that developers can effectively use the LPU and GroqCloud for their specific use cases. With these tools, developers can build, test, and optimise AI models, whether they are working with small models or large-scale systems that require extensive computational power.
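As an illustration of the client-side plumbing such tooling typically handles, here is a minimal retry-with-backoff helper of the kind a developer might wrap around any remote inference call. All names here are illustrative and not part of Groq's SDK.

```python
import time

def with_retries(call, max_attempts=3, base_delay=0.1):
    """Invoke `call`, retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# A flaky stand-in for an inference request: fails once, then succeeds.
attempts = {"n": 0}
def flaky_inference():
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise ConnectionError("transient network error")
    return {"choices": [{"message": {"content": "ok"}}]}

result = with_retries(flaky_inference)
```

Exponential backoff (0.1s, 0.2s, 0.4s, ...) is a common default for rate-limited or transiently unavailable APIs; production SDKs usually bundle this behaviour in.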
 

Groq’s AI inference services cater to a wide range of industries, from healthcare and finance to autonomous systems and natural language processing. These services are built around the company’s powerful hardware and cloud infrastructure, providing businesses with the resources they need to deploy AI at scale. Whether it’s through GroqCloud or on-premise solutions, the company’s offerings enable businesses to run their AI models more efficiently, reduce costs, and accelerate time-to-market.

