Tagged "release"
- Red Hat Launches AI Enterprise for Hybrid AI Deployments
- Qwen3.5 Thinking Mode Can Be Disabled for Production Inference Optimization
- Qwen3.5 Series Releases Comprehensive Model Lineup Across All Tiers
- Qwen3.5-35B-A3B Emerges as Game-Changer for Agentic Coding Tasks
- Kioxia Sampling UFS 5.0 Embedded Flash Memory for Next-Generation Mobile Applications
- Elastic Introduces Best-in-Class Embedding Models for High Performance Semantic Search
- Making Wolfram Technology Available as Foundation Tool for LLM Systems
- Ouro 2.6B Thinking Model GGUFs Released with Q8_0 and Q4_K_M Quantization
- Ollama 0.17 Released With Improved OpenClaw Onboarding
- Google Open-Sources NPU IP, Synaptics Implements It for Hardware Acceleration
- DietPi v10.1 Released
- Asus ExpertBook B3 G2 with 50 TOPS AI Sets New Enterprise Standard
- Vellium v0.3.5: Major Writing Mode Overhaul and Native KoboldCpp Support
- [Release] Ouro-2.6B-Thinking: ByteDance's Recurrent Model Now Runnable Locally
- Claude Code Open – AI Coding Platform with Web IDE and Agents
- LayerScale Launches Inference Engine Faster Than vLLM, SGLang, and TRT-LLM
- Kitten TTS V0.8 Released: State-of-the-Art Super-Tiny Text-to-Speech Model Under 25MB
- Aegis.rs: Open Source Rust-Based LLM Security Proxy Released
- Tailscale Releases New Tool to Prevent Sensitive Data Leakage to Cloud AI Services
- Sarvam AI Launches Edge Model to Challenge Major AI Players with Local-First Approach
- Alibaba's Qwen3.5-397B Achieves #3 Position in Open Weights Model Rankings
- OpenClaw Refactored in Go, Runs on $10 Hardware
- GLM-5 Technical Report: DSA Innovation Reduces Training and Inference Costs
- Cloudflare Releases Agents SDK v0.5.0 with Rust-Powered Infire Engine for Edge Inference
- AMD Announces Day 0 Support for Qwen 3.5 LLM on Instinct GPUs
- Meet Sarvam Edge: India's AI Model That Runs on Phones and Laptops With No Internet
- Qwen 3.5-397B-A17B Now Available for Local Inference with Aggressive Quantisation
- Cohere Releases Tiny Aya: Efficient 3.3B Multilingual Model for 70+ Languages
- ASUS Zenbook 14 Launches in India with AI-Capable Hardware, Starting at Rs 1,15,990
- Asus ExpertBook B3 G2 Laptop Features Ryzen AI 9 HX 470 CPU in 1.41kg Ultraportable Form Factor
- InitRunner: YAML-Based AI Agent Framework with RAG and Memory
- GPU-Accelerated DataFrame Library for Local Inference Workloads
- Alibaba Unveils Major AI Model Upgrade Ahead of DeepSeek Release
- WinClaw: Windows-Native AI Assistant with Office Automation
- GitHub Announces Support for Open Source AI Project Maintainers
- MiniMax M2.5: 230B Parameter MoE Model Coming to HuggingFace
- Ming-flash-omni-2.0: 100B MoE Omni-Modal Model Released
- ByteDance Releases Seedance 2.0 AI Development Platform
- Samsung's REAM: Alternative Model Compression Technique
- OpenClaw with vLLM Running for Free on AMD Developer Cloud
- Microsoft MarkItDown: Document Preprocessing Tool for LLMs
- Memio Launches AI-Powered Knowledge Hub for Android with Local Processing
- New Header-Only C++ Benchmark Tool for Predictive Models on Raw Binary Streams
- GLM-5 Released: 744B Parameter MoE Model Targeting Complex Tasks
- Nanbeige4.1-3B: A Small General Model that Reasons, Aligns, and Acts
- DeepSeek Launches Model Update with 1M Context Window
- Arm SME2 Technology Expands CPU Capabilities for On-Device AI