SambaNova Systems, maker of dedicated AI hardware and software systems, has launched a new AI chip, the SN40L, that will be used in the company’s full-stack large language model (LLM) platform, the SambaNova Suite.
First introduced in March, the SambaNova Suite uses custom processors and operating systems for AI inference and training. It’s designed to be an alternative to power-hungry and expensive GPUs.
Upgrading the hardware so soon after launch implies a significant jump in performance, and the vendor’s figures bear that out: the SN40L can serve an LLM of up to 5 trillion parameters, with sequence lengths of 256K+ possible on a single system node.