Tether Launches AI Training Framework for Smartphones and Consumer GPUs

Tether has announced an AI training framework that enables fine-tuning of large language models directly on consumer devices such as smartphones and non-Nvidia GPUs. The system, part of the QVAC platform, combines Microsoft's BitNet architecture with LoRA (low-rank adaptation) to sharply reduce memory requirements and computational cost.
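The core idea behind pairing BitNet with LoRA is that the large base weights can be frozen in an extremely low-bit (ternary) form while only small low-rank adapter matrices are trained. The sketch below is illustrative only, not Tether's or Microsoft's actual code; the quantization scheme (absmean scaling to {-1, 0, +1}, as described for BitNet b1.58) and the rank parameter `r` are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_ternary(w):
    """BitNet b1.58-style quantization: map each weight to {-1, 0, +1}
    using a single per-tensor absmean scale (illustrative sketch)."""
    scale = np.mean(np.abs(w)) + 1e-8
    return np.clip(np.round(w / scale), -1, 1), scale

class LoRALinear:
    """Frozen ternary base weight plus a trainable low-rank update.

    Only A and B (rank r) would be trained, so gradients and optimizer
    state stay small enough for a phone-class device."""
    def __init__(self, w, r=8):
        self.q, self.scale = quantize_ternary(w)      # frozen base, ~2 bits/weight when packed
        d_out, d_in = w.shape
        self.A = rng.normal(0, 0.01, size=(r, d_in))  # trainable down-projection
        self.B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

    def forward(self, x):
        base = (self.q * self.scale) @ x              # cheap ternary matmul
        return base + self.B @ (self.A @ x)           # low-rank correction

layer = LoRALinear(rng.normal(size=(64, 128)), r=4)
y = layer.forward(rng.normal(size=128))
```

Because `B` starts at zero, the adapted layer initially behaves exactly like the quantized base model, and training only has to learn the low-rank delta.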

According to Tether, the framework supports multiple platforms and is compatible with AMD and Intel chips, Apple Silicon, and Qualcomm mobile GPUs. Engineers can reportedly fine-tune models of up to 1 billion parameters on a smartphone in under two hours, and scale up to 13 billion parameters on mobile devices.

BitNet's low-bit ternary weights reduce VRAM usage by up to 77.8% compared with 16-bit models and accelerate inference on mobile GPUs. Tether also highlights potential applications such as federated learning, which would reduce reliance on cloud infrastructure.
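To see why low-bit weights cut memory so drastically, a back-of-the-envelope calculation helps. The figures below are my own illustrative arithmetic for weight storage only, not Tether's methodology; the reported 77.8% presumably measures total VRAM in a specific workload (weights plus activations, adapters, and runtime overhead), so it need not match the weights-only number:

```python
def weight_memory_gb(n_params, bits_per_weight):
    """Approximate weight storage in GB for a model at a given precision."""
    return n_params * bits_per_weight / 8 / 1e9

fp16    = weight_memory_gb(1e9, 16)  # 1B params at 16 bits -> 2.0 GB
ternary = weight_memory_gb(1e9, 2)   # ternary weights packed into 2 bits -> 0.25 GB
saving  = 1 - ternary / fp16         # 87.5% reduction on weights alone
```

Even with a conservative 2-bits-per-weight packing (the theoretical minimum for ternary values is log2(3) ≈ 1.58 bits), a 1-billion-parameter model's weights shrink from about 2 GB to 0.25 GB, which is what makes on-device fine-tuning plausible at all.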

This move reflects a broader trend of crypto companies expanding into AI and computing infrastructure, alongside the growth of AI agents across the industry.
