
AMD-Friendly AI LLM Developer Jokes About Nvidia GPU Shortages



The co-founder and CEO of Lamini, an artificial intelligence (AI) large language model (LLM) startup, posted a video to Twitter/X poking fun at the ongoing Nvidia GPU shortage. The Lamini boss is feeling smug at the moment, largely because the firm’s LLM platform runs exclusively on readily available AMD GPUs. Moreover, the firm claims that AMD GPUs using ROCm have reached “software parity” with the previously dominant Nvidia CUDA platform.


The video shows Sharon Zhou, CEO of Lamini, checking an oven in search of some LLM-accelerating GPUs. First she ventures into a kitchen, superficially similar to Jensen Huang’s famous Californian cocina, but upon checking the oven she notes a “52 weeks lead time – not ready.” Frustrated, Zhou checks the grill in the yard, where a freshly BBQed AMD Instinct GPU is ready for the taking.

(Image credit: Lamini)

We don’t know the technical reasons why Nvidia GPUs require lengthy oven cooking while AMD GPUs can be prepared on a grill. Hopefully, our readers can shine some light on this semiconductor conundrum in the comments.

On a more serious note, a closer look at Lamini, the startup behind the video, shows it is no joke. CRN provided some background coverage of the Palo Alto, Calif.-based startup on Tuesday. Among the highlights: Lamini CEO Sharon Zhou is a machine learning expert, and CTO Greg Diamos is a former Nvidia CUDA software architect.

Lamini LLM acceleration

(Image credit: Lamini)

It turns out that Lamini has been “secretly” running LLMs on AMD Instinct GPUs for the past year, with a number of enterprises benefiting from private LLMs during the testing period. The most notable Lamini customer is probably AMD itself, which says it “deployed Lamini in our internal Kubernetes cluster with AMD Instinct GPUs, and are using finetuning to create models that are trained on AMD code base across multiple components for specific developer tasks.”
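AMD’s quote describes the finetuning workflow but doesn’t show any code. Purely as a rough illustration, here is a minimal sketch of what such a finetuning call might look like; the class and `train()` method are modeled on Lamini’s public Python client, and the base model identifier and training examples are invented assumptions, not details from the article or from AMD.

```python
# Rough illustration of a finetuning workflow like the one AMD describes.
# Assumptions: the Lamini class and train() method are modeled on Lamini's
# public Python client; the model id and examples are invented for clarity.
from lamini import Lamini

llm = Lamini(model_name="meta-llama/Llama-2-7b-chat-hf")  # assumed base model
llm.train(data=[
    # Hypothetical input/output pairs drawn from an internal code base.
    {"input": "Where is the build script for component X?",
     "output": "See tools/build/component_x.sh in the internal repo."},
])
```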

A key claim from Lamini is that it takes just “3 lines of code” to run production-ready LLMs on AMD Instinct GPUs. Another touted advantage is that Lamini runs on readily available AMD GPUs. CTO Diamos also asserts that Lamini’s performance isn’t overshadowed by Nvidia solutions, because AMD ROCm has achieved “software parity” with Nvidia CUDA for LLMs.
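The article doesn’t reproduce the three lines in question. As a hedged sketch based on Lamini’s public Python client, a minimal inference call might look like the following; the package import, class name, model identifier, and prompt are illustrative assumptions rather than confirmed details.

```python
# A minimal sketch of Lamini's "3 lines of code" claim. Assumptions: the
# package/class names follow Lamini's public Python client; the model
# identifier and prompt are illustrative, not taken from the article.
from lamini import Lamini

llm = Lamini(model_name="meta-llama/Llama-2-7b-chat-hf")  # assumed base model
print(llm.generate("Summarize the advantages of AMD Instinct GPUs for LLMs."))
```

If the claim holds, the heavy lifting (ROCm kernels, scheduling onto Instinct GPUs) would happen behind the hosted service rather than in user code, which is what would make a three-line interface plausible.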




