Researchers at DeepSeek have released a new experimental model, V3.2-exp, designed to dramatically lower inference costs in long-context operations. DeepSeek announced the model in a post along with a linked academic paper.
The most important feature of the new model is DeepSeek Sparse Attention, an intricate system for cutting the cost of attention over long inputs. First, a module called a lightning indexer prioritizes specific excerpts from the context window. Then a fine-grained token selection system chooses specific tokens from within those excerpts to load into the module's limited attention window. Together, these let the Sparse Attention models operate over long stretches of context with comparatively small server loads; a rough sketch of the idea follows below.
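DeepSeek's actual lightning indexer is a learned component described in the paper, but the two-stage shape of the idea can be illustrated in a few lines. The snippet below is a stand-in sketch, not DeepSeek's implementation: it uses mean-pooled block summaries as a crude indexer and raw query-key scores for token selection, and every name and parameter in it (sparse_attention, block_size, top_blocks, tokens_per_block) is hypothetical.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def sparse_attention(query, keys, values, block_size=64, top_blocks=4, tokens_per_block=16):
    """Two-stage sparse attention sketch (illustrative, not DeepSeek's code).

    Stage 1 (stand-in for the lightning indexer): cheaply score fixed-size
    blocks of the context and keep only the top-scoring blocks.
    Stage 2 (stand-in for fine-grained token selection): within the kept
    blocks, keep the individual tokens with the highest query-key scores.
    Full attention then runs only over that small selected set.
    """
    n, d = keys.shape

    # Stage 1: one coarse score per block, using the mean key as a cheap summary.
    n_blocks = (n + block_size - 1) // block_size
    block_scores = np.empty(n_blocks)
    for b in range(n_blocks):
        blk = keys[b * block_size : min((b + 1) * block_size, n)]
        block_scores[b] = query @ blk.mean(axis=0)
    keep_blocks = np.argsort(block_scores)[-top_blocks:]

    # Stage 2: token-level scores inside the surviving blocks only.
    candidate_idx = np.concatenate([
        np.arange(b * block_size, min((b + 1) * block_size, n))
        for b in keep_blocks
    ])
    token_scores = keys[candidate_idx] @ query
    k = min(tokens_per_block * top_blocks, len(candidate_idx))
    selected = candidate_idx[np.argsort(token_scores)[-k:]]

    # Dense attention over the selected tokens instead of all n tokens.
    attn = softmax(keys[selected] @ query / np.sqrt(d))
    return attn @ values[selected]

# Toy usage: a 4096-token context, but attention only ever touches ~64 tokens.
rng = np.random.default_rng(0)
d = 32
keys = rng.standard_normal((4096, d))
values = rng.standard_normal((4096, d))
query = rng.standard_normal(d)
out = sparse_attention(query, keys, values)
print(out.shape)  # (32,)
```

The point of the structure is that the expensive softmax attention never sees most of the context: the cheap indexing pass narrows thousands of tokens down to a small candidate set, which is what makes long-context serving cheaper.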
The benefits are most significant in long-context operations: DeepSeek's preliminary testing found that the price of a simple API call could be cut by as much as half in those situations. Further testing will be needed for a more robust assessment, but because the model is open-weight and freely available, it won't be long before third-party tests can evaluate the claims made in the paper.
DeepSeek's new model is one of a string of recent breakthroughs tackling the problem of inference costs: the server costs of operating a pre-trained AI model, as distinct from the cost of training it. In DeepSeek's case, the researchers were looking for ways to make the fundamental transformer architecture operate more efficiently, and they are finding that there are significant improvements to be made.
Based in China, DeepSeek has been an unusual figure in the AI boom, particularly for those who view AI research as a nationalist struggle between the U.S. and China. The company made waves at the beginning of the year with its R1 model, trained primarily with reinforcement learning at a far lower cost than its American competitors. But the model did not spark the wholesale revolution in AI training that some predicted, and the company has receded from the spotlight in the months since.
The new sparse attention approach is unlikely to produce the same uproar as R1, but it could still teach U.S. providers some much-needed tricks for keeping inference costs low.

