Bittensor’s TAO ripped higher on Thursday and topped in early European trading on Friday after Nvidia CEO Jensen Huang highlighted the project on the All-In podcast, pushing the token from $243.5 to $310.6 before it cooled to $298.1 by press time.
The move put one of crypto’s most closely watched AI-linked assets back in focus, not because Huang endorsed the token directly, but because he treated the underlying technical milestone as meaningful in a much bigger debate over open AI infrastructure.
The moment came when Chamath Palihapitiya pointed Huang to what he called a “pretty crazy technical accomplishment” inside “this crypto project called Bittensor.” He described a recent training run on Subnet 3 in which participants used distributed excess compute to train a Llama model “totally distributed” while still managing the process statefully.
Nvidia CEO Responds To Bittensor’s Accomplishment
Huang’s immediate reaction was brief but memorable: “Our modern version of Folding@home.”
That line mattered because it effectively reframed Bittensor’s latest milestone in language traditional tech audiences already understand. Folding@home was one of the most recognizable examples of decentralized volunteer computing; Huang’s comparison suggested he viewed Bittensor’s experiment less as crypto theater and more as a legitimate expression of distributed coordination.
In the context of TAO’s price action, traders appeared to read that as external validation from one of the most influential executives in AI hardware.
Huang then widened the discussion beyond Bittensor itself and into the structure of the AI market. “I believe we fundamentally need models as first-class products, proprietary products, as well as models as open source. These two things are not A or B, it’s A and B. There’s no question about it,” he said. He followed that with an even sharper distinction: “Models are a technology, not a product. Models are technology, not a service.”
He spent the next stretch explaining why that dual-track model matters. For general-purpose consumer use, Huang said most people will continue to prefer turnkey services rather than fine-tuning their own systems. “I would really, really love not to go fine-tune my own. I would really love to keep using ChatGPT. I love to use Claude. I love to use Gemini. I love to use X,” he said, arguing that this horizontal layer of AI products “is thriving” and “is going to be great.”
On the @theallinpod this week, @chamath asked @nvidia CEO Jensen Huang about decentralized AI training, calling our Covenant-72B run “a pretty crazy technical accomplishment.”
One correction: it’s 72 billion parameters, not four. Trained permissionlessly across 70+ contributors… pic.twitter.com/BN0tWG66e8
— templar (@tplr_ai) March 19, 2026
But he drew a hard line when it came to industry-specific deployment, saying domain expertise “has to be captured in a way that they can control,” and that “it can only come from open models.”
That distinction goes to the heart of why TAO reacted so sharply. While Huang didn’t make a token call or present Bittensor as the winner of open AI, he did endorse the coexistence of proprietary and open model ecosystems, while acknowledging that specialized industries will need more controllable, open foundations.
At press time, TAO traded at $297.0.