Grok 4.3 is here, and people are talking less about the model itself and more about what it gets right that others don't. The voice mode doesn't quietly route you to a cheaper model like Haiku. Users report dictation accuracy hitting 98% on accented speech, compared to 90-95% on ChatGPT. The model also nails tone and formality, which matters if you're writing professional English as a non-native speaker. Responses feel more human, less like getting a lecture from an eager teaching assistant.

SuperGrok subscribers get something called "council of agents," where multiple agents with different system prompts work on your query in parallel. It's like getting second and third opinions automatically, without having to rephrase or re-ask.
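xAI hasn't published the mechanics, but the pattern itself is easy to sketch: fan the same query out to several agents, each seeded with a different system prompt, and collect their answers in parallel. A minimal stdlib sketch, where `call_model` and the persona prompts are hypothetical stand-ins for the real model API and system prompts:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical personas; the actual system prompts are not public.
PERSONAS = {
    "skeptic": "Challenge the premise and look for flaws.",
    "optimist": "Find the strongest version of the idea.",
    "pragmatist": "Focus on what is actionable right now.",
}

def call_model(system_prompt: str, query: str) -> str:
    """Stand-in for an actual LLM API call."""
    return f"[{system_prompt.split()[0]}] response to: {query}"

def council(query: str) -> dict[str, str]:
    """Send one query to every persona concurrently and collect answers."""
    with ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(call_model, prompt, query)
            for name, prompt in PERSONAS.items()
        }
        return {name: f.result() for name, f in futures.items()}
```

A real implementation would presumably add an aggregation step that ranks or merges the parallel answers; the point is that the second and third opinions cost no extra round-trips from the user's side.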

But then there's the app. No MCP support means you can't connect Grok to your tools or data sources. Projects don't work in the mobile apps. You can't search your chat history or rely on memory across sessions, and generated artifacts can't be added to projects. Voice mode doesn't even work inside projects. One commenter noted that once content moves to a project, it disappears from the native apps entirely. That's not a minor UX quirk. That's a workflow killer.

Users in the Hacker News thread specifically called out the lack of MCP and connected apps as the reason they're holding off on subscribing. Anthropic, OpenAI, Google, and Microsoft are all building out context management and integration protocols. xAI seems to be betting that standalone model quality will win over ecosystem connectivity. Maybe it will for some users. But for anyone doing real work that requires connecting an LLM to actual tools and data, Grok 4.3 is a great model trapped in a mediocre app.
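For a sense of what's being left on the table: MCP is framed as JSON-RPC 2.0, so a client invoking a server-side tool sends a `tools/call` request like the one below. The tool name and arguments here are made up for illustration; only the envelope follows the spec:

```python
import json

# Sketch of an MCP tool invocation using the spec's JSON-RPC 2.0 framing.
# The tool "search_notes" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_notes",
        "arguments": {"query": "Q3 roadmap"},
    },
}

# Serialize for transport to the MCP server.
wire = json.dumps(request)
```

Without client-side MCP support, there's nowhere for a request like this to go, no matter how good the underlying model is.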