Tether Launches BitNet LoRA Framework Across Platforms

  • Tether’s BitNet LoRA framework enables AI model training across smartphones, GPUs, and consumer devices.

  • The system reduces memory use and boosts performance, with up to 77.8% lower VRAM requirements.

  • Users can fine-tune models up to 13B parameters on mobile devices, expanding edge AI capabilities.

Tether announced a new AI framework through its QVAC Fabric platform, enabling cross-platform BitNet LoRA training on consumer devices. The update allows billion-parameter models to run on smartphones and GPUs. CEO Paolo Ardoino shared the development, highlighting reduced costs and broader access to AI tools.

Cross-Platform AI Training Expands Access

The QVAC Fabric update introduces cross-platform support for BitNet LoRA fine-tuning, allowing AI models to be trained and run across different hardware and operating systems.

Notably, the framework supports GPUs from AMD, Intel, and Apple, including mobile chipsets. It also uses Vulkan and Metal backends for compatibility.

According to Tether, this is the first time BitNet LoRA works across such a wide range of devices. As a result, users can train models on everyday hardware.

Performance Gains On Consumer Hardware

The system reduces memory and compute needs by combining BitNet and LoRA techniques. BitNet compresses model weights into a small set of simplified values, while LoRA restricts training to a small number of additional parameters.
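To make the combination concrete, here is a minimal sketch of the two ideas in NumPy. It is not Tether's implementation; the function and class names are illustrative, and the quantizer shown is a simple absmean ternary scheme (weights rounded to -1, 0, or +1) paired with a frozen base matrix and a small trainable low-rank update in the LoRA style:

```python
import numpy as np

def ternary_quantize(w):
    """Illustrative absmean quantization: round weights to {-1, 0, +1}
    and keep a single per-matrix scale for dequantization."""
    scale = np.mean(np.abs(w)) + 1e-8
    q = np.clip(np.round(w / scale), -1, 1)
    return q, scale

class LoRALinear:
    """Frozen quantized base weight plus a trainable low-rank update:
    y = x @ (scale * Q) + x @ A @ B. Only A and B would be trained."""
    def __init__(self, w, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.q, self.scale = ternary_quantize(w)      # frozen base
        d_in, d_out = w.shape
        self.A = rng.normal(0, 0.01, (d_in, rank))    # trainable
        self.B = np.zeros((rank, d_out))              # trainable; zero init
                                                      # keeps output = base at start

    def forward(self, x):
        return x @ self.q * self.scale + x @ self.A @ self.B
```

Because the base weights are frozen and stored in ternary form, only the tiny `A` and `B` matrices need full precision and gradients, which is where the memory savings come from.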

Together, these methods lower hardware requirements significantly. For example, on mobile devices GPU inference runs two to eleven times faster than CPU inference.

Additionally, memory usage drops sharply compared to full-precision models. Benchmarks show up to 77.8% less VRAM use than comparable systems.
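As a rough illustration of what a 77.8% reduction implies, the sketch below estimates the VRAM needed for model weights alone (ignoring activations and optimizer state, and assuming 2 bytes per parameter for a full-precision fp16 baseline; the function name and assumptions are this article's own, not Tether's benchmark methodology):

```python
def vram_estimate_gb(params, bytes_per_param=2, reduction=0.0):
    """Back-of-envelope VRAM for model weights only, in GB (1 GB = 1e9 bytes).
    `reduction` is the fractional saving versus the full-precision baseline."""
    return params * bytes_per_param * (1 - reduction) / 1e9

full = vram_estimate_gb(13e9)                       # 13B params at fp16: 26.0 GB
reduced = vram_estimate_gb(13e9, reduction=0.778)   # with 77.8% saving: ~5.8 GB
```

Under these assumptions, a 13-billion-parameter model drops from roughly 26 GB of weight memory to under 6 GB, which is the difference between needing a data-center GPU and fitting on high-end consumer hardware.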

Tether also demonstrated fine-tuning on smartphones. Tests showed 125-million-parameter models trained in minutes on devices such as the Samsung Galaxy S25.

Mobile And Edge Devices Handle Larger Models

The framework enables larger models to run on edge devices. Tether reported successful fine-tuning of models up to 13 billion parameters on an iPhone 16.

Moreover, the system supports mobile GPUs such as Qualcomm Adreno, Arm Mali, and those in Apple Bionic chips. This expands AI development beyond specialized hardware.

According to Paolo Ardoino, AI development often depends on expensive infrastructure. He said this framework shifts capabilities toward local devices.

Tether added that the system reduces reliance on centralized platforms. It also allows users to train and process data directly on their devices.
