
16.1 million UK adults have already turned to AI for financial advice, with usage peaking among Millennials and Gen Z, a BrokerChooser survey reveals. The nationwide survey of 2,000 UK adults also found that over a third are using AI to compare investments, while one in five rely on it for stock market forecasts.
But just how trustworthy are these AI chatbots for money and investment advice?
In response, the forex experts at BrokerChooser conducted an in-depth analysis of two major AI tools – Google’s AI Overview and OpenAI’s ChatGPT – and found concerning results both for users seeking investment answers and for the providers delivering them.
To assess response accuracy, ChatGPT was queried on over 2,000 brokers using three questions – “Is this broker a scam?”, “Is it legit?”, and “Is it safe?” – generating 6,336 queries to simulate how a typical user would research a broker. For Google AI Overview, 1,703 regulator-flagged scam brokers were queried with a single prompt, “Is [broker] legit?”, and responses were analysed for correct warnings, mismatches or false-safe claims.
Key findings:
- ChatGPT shows strong capability in flagging potential risks, but remains less reliable when confirming safety.
- OpenAI’s model demonstrates a 94.2% scam detection rate, indicating good effectiveness in detecting financial fraud.
- However, its 34.3% safety precision highlights a key limitation: when the model classifies something as safe, it is wrong about two out of three times.
- Google’s AI Overview is less precise, correctly spotting scams in only about eight in 10 (81.5%) cases.
- When asked about a scam broker, Google AI Overview gives information about a completely different provider in roughly one in six (18%) cases.
- This information comes to light after a recent BrokerChooser survey found that over half (52.85%) of respondents would act on AI-generated financial advice.
AI blind spots: Millions at risk of misleading investment advice
BrokerChooser’s analysis shows that while AI models are effective at detecting scams, they still show critical reliability gaps — particularly when confirming whether a broker is safe.
ChatGPT correctly warned users about scam brokers in 94.2% of cases, demonstrating strong detection capabilities. However, in 5.8% of scam cases, no warning was issued, exposing users to potential financial harm.
At the same time, the model shows excessive caution toward legitimate providers. For regulated brokers, accuracy drops to just 52.3%, with 47.7% of responses including unnecessary warnings. While caution is preferable to under-warning, this lack of precision can undermine user trust and unfairly impact legitimate firms.
From another angle, when the AI confirms that a broker is safe, it is wrong about two out of three times. With a safety precision of just 34.3%, the majority of “safe” verdicts are incorrect — highlighting a key limitation in relying on AI reassurance.
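The two headline metrics above are straightforward to compute. The sketch below reproduces them using hypothetical counts chosen to match the reported percentages; BrokerChooser has not published its raw tallies, so these numbers are illustrative only.

```python
# Illustrative calculation of the two metrics discussed above.
# The counts are made up to match the reported 94.2% and 34.3% figures;
# they are NOT BrokerChooser's actual data.

scam_brokers_warned = 942    # scam brokers the model correctly warned about
scam_brokers_missed = 58     # scam brokers given no warning

safe_verdicts_correct = 343  # "safe" answers about genuinely safe brokers
safe_verdicts_wrong = 657    # "safe" answers about brokers that were not safe

# Scam detection rate: share of scam brokers that triggered a warning.
detection_rate = scam_brokers_warned / (scam_brokers_warned + scam_brokers_missed)

# Safety precision: share of "safe" verdicts that were actually correct.
safety_precision = safe_verdicts_correct / (safe_verdicts_correct + safe_verdicts_wrong)

print(f"Scam detection rate: {detection_rate:.1%}")    # 94.2%
print(f"Safety precision:    {safety_precision:.1%}")  # 34.3%
```

The asymmetry is the point: a model can catch most scams (high detection rate) while its reassurances remain unreliable (low safety precision), because the two metrics condition on different things.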
Google’s AI Overview shows similar risks. It correctly flags scam brokers in 81.5% of cases, but around 18% of responses confuse scam entities with legitimate providers, and a small share (0.7%) falsely presents scams as safe.
These findings are especially worrying given that BrokerChooser’s recent survey found that over half (52.85%) of UK adults would act on AI-generated financial advice, and about a quarter (25.43%) use AI for financial guidance up to six days a week.
Adam Nasli, Head of Broker Analysis at BrokerChooser, commented:
“Finding errors in AI’s judgement is more than an academic exercise—these mistakes can have real financial consequences. Many users asking whether a platform is legitimate are beginners, often taking their first steps into investing.”
“AI-generated scam warnings can be a useful signal. But ‘safety confirmations’ should be treated with caution, and always backed up with independent research.”
“To improve accuracy, we suggest AI models draw on existing, constantly updated databases of known scams, like BrokerChooser’s Scamshield MCP server – though any solution that keeps the AI current with known scams should help.”
“Chat models are just one example of how AI is spreading across the brokerage industry. As adoption grows, some investors may begin to treat these tools as a substitute for their own research or advisors—that’s a risk. AI should support decisions, not replace them.”