The intrinsic knowledge of LLMs is very unreliable, I agree. But combined with, e.g., web search or hand-picked context, they perform rather well. Depending on the tool, you can see the actual sources it read in the MCP tool call (for me usually Kagi Assistant, or the Zed editor with the Kagi MCP server).
For me it is a great help to be able to search the web for relevant sources about public administration in a foreign language and still get summaries in the original query language.
Skimming a large number of potential sources is also really practical.