
Feedback on AI review flow

Hi team,

First, the AI-generated follow-up questions are excellent. The contextual analysis by sector is very strong and becomes even better with proper briefing.

However, I have an issue with the current review flow. When a customer gives a high rating (e.g. 5 stars), the system still allows follow-up answers that include clearly negative wording (e.g. "waiting time was disappointing"). The AI then combines positive and negative feedback and suggests copying this directly into a public Google review.

This creates a problem: a satisfied customer can unintentionally publish a mixed or negative review on public platforms. In my view, this is not ideal for two reasons:

- It distorts the initial high rating (NPS-style inconsistency).
- It increases the risk of users posting unintended negative public reviews.

What I would suggest instead:

- The AI should detect negative sentiment keywords in follow-up answers.
- If negative sentiment is detected after a high rating, the flow should switch from "public review generation" to an internal feedback form (private feedback / support loop).
- Ideally, there should be a final AI "review check" step before anything is sent to Google, to validate consistency between the rating and the text.

In short, the system is very powerful, but it needs stronger gating logic before publishing public reviews. Happy to discuss if needed.

Best regards,
Rudy Mence
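The gating step this post proposes could be sketched roughly as follows. This is only an illustration, not the product's actual logic: the keyword list, the rating threshold, and the route names are all made up for the example.

```python
# Hypothetical sketch of the suggested gating: a high rating followed by
# negative wording routes to a private feedback form instead of public
# Google review generation. Keywords and threshold are illustrative only.

NEGATIVE_KEYWORDS = {"disappointing", "slow", "rude", "waiting", "problem", "bad"}

def route_feedback(star_rating: int, follow_up_text: str) -> str:
    """Return 'public_review' or 'internal_form' for a rating + follow-up pair."""
    words = {w.strip(".,!?").lower() for w in follow_up_text.split()}
    has_negative = bool(words & NEGATIVE_KEYWORDS)
    # High rating + negative wording is inconsistent: keep it private.
    # (Low-rating flows presumably already route elsewhere.)
    if star_rating >= 4 and has_negative:
        return "internal_form"
    return "public_review"
```

In practice the "review check" step would use the AI's own sentiment analysis rather than a keyword list, but the routing decision would look the same.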

Rudy Mencé 1 day ago

📥 Feedback

Major flaws and bugs in AI Search logic

➡️ One of the preset questions the tool will ask is something like "What can you tell me about the business X located in Y? Are they a good Z?" The AI might respond "Unfortunately, I don't have enough information about the company X in Y to assess their services as a Z." The Search AI will then count that mention of X as a positive signal, since it is an exact match of the business name. This makes no sense.

➡️ There are also many bugs where the AI Search says it found a mention, but the actual response contains no mention.

➡️ The prompts do not work as expected when the templates are in English and the Search AI form is filled in another language. Example:

Prompt translated: "Best door delivery in Tampere" ("ovien toimitus" means door delivery and is one of the services, not a business name).
Response: "Unfortunately I don't have information about a business called 'Best Ovien' or their delivery services in Tampere."

Because English and Finnish are mixed, the model thinks "Best Ovien" is the name of a business. I am not sure whether this is just due to missing translations, or whether the tool actually uses English templates with whatever language the user writes in the Search AI form.

🆘 Next I'd like to ask about the decision not to use the Web Search Tool of OpenRouter. (Screenshot from OpenRouter logs: every single query from EMR is null.) The Gemini and OpenAI models used in Search AI have a knowledge cutoff somewhere in mid 2025. It is pointless to use them in this tool without the Web Search Tool. What is the logic behind this? It makes 2/3 of the tool useless. I tried it once, and every single Gemini and ChatGPT result was the model saying it does not have the required data to answer the question. The only model that currently works is Perplexity, since it has search access natively. I suspect Sales Intelligence might be suffering from this same issue.
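The first bug described above is a classic false positive: a bare substring match treats the model's "I don't have information about X" disclaimer as a positive mention. A minimal fix, sketched here as an assumption about how mention counting might work (this is not the actual Search AI code, and the disclaimer phrase list is made up), is to check the sentence containing the match for no-knowledge phrases:

```python
# Illustrative sketch: only count the business name as a mention when the
# sentence containing it is not a model disclaimer about missing knowledge.
# Phrase list and function name are assumptions for the example.
import re

NO_KNOWLEDGE_PHRASES = (
    "don't have enough information",
    "don't have information",
    "no information about",
    "unable to find",
)

def is_positive_mention(response: str, business_name: str) -> bool:
    """True only if the name appears in a sentence that is not a disclaimer."""
    for sentence in re.split(r"(?<=[.!?])\s+", response):
        if business_name.lower() in sentence.lower():
            low = sentence.lower()
            if not any(p in low for p in NO_KNOWLEDGE_PHRASES):
                return True
    return False
```

A phrase list is brittle across languages, which ties into the mixed English/Finnish issue; a sentiment or stance check on the matching sentence would be more robust.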

Severi 5 days ago

🐛 Bug Reports

Download QR with Agency Uploaded Templates

Currently, users can design and download their QR as they want, but after that they need to manually place that QR into their flyer or standee template. The idea is to let agencies upload pre-designed template files so that users can download their QR already inserted into those templates.

The flow: the user clicks the download button > selects "download with template" > on the next screen they preview all agency-uploaded templates and select one of their choice > proceed. The system automatically inserts/replaces their QR in the template and downloads it.

It would be great if we could categorize templates by industry (hotels, clinics, etc.) and let users filter them by category and size. I'm not sure how technically feasible it is, but if it doesn't add too much load on the system or the development team, it's a nice-to-have.
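The core of the auto-insert step is just placement math: each agency template would need to carry a designated QR region, and the system scales and centers the QR inside it. A rough sketch under that assumption (the region fields and function name are hypothetical, not an existing API):

```python
# Hypothetical sketch: given a template's designated QR region
# (x, y, width, height) and the rendered QR's pixel size, compute where to
# paste the QR so it is centered and never overflows the region.

def qr_placement(region_x: int, region_y: int,
                 region_w: int, region_h: int, qr_size: int):
    """Return (x, y, side) for a square QR centered in the region,
    scaled down only if it does not fit."""
    side = min(qr_size, region_w, region_h)   # never overflow the region
    x = region_x + (region_w - side) // 2
    y = region_y + (region_h - side) // 2
    return x, y, side
```

The actual compositing (pasting the QR bitmap into the template image or PDF at the computed coordinates) would be handled by whatever imaging library the backend already uses.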

Arun Saini 7 days ago

💡 Feature Request

Review statistics/analytics API

Judging by the abuse concerns raised here, a native business listing/profile page feature within the EMR platform probably isn't on the roadmap, and I support this decision. I've put together a simplified description below of how white-label partners could achieve this on their own agency or client websites using a CMS, but it depends on one missing piece.

Even if we embed the widgets manually, the structured data they provide is limited to total review count and average rating. That's not enough. The real value of a dedicated business review profile page, especially in the current AI search landscape, lies in the detailed statistics. When Google AI Overviews, ChatGPT, or Perplexity answer a query like "is business good?", they look for specific, verifiable, crawlable data on a real web page. An iframe widget is invisible to these systems; the data inside it simply doesn't exist as far as AI crawlers are concerned. A properly built profile page with rich review statistics rendered as native HTML and structured JSON-LD schema is what actually gets cited. And it's only possible if the underlying data is accessible via API.

With a dedicated analytics endpoint, white-label partners could build CMS-driven profile pages that populate and update automatically. No manual maintenance, always fresh, always crawlable.

If it isn't too large a job, I'd like to kindly request a new read-only endpoint in the official documented API that exposes the review statistics already available in the dashboard.

Suggested parameters:

- Time period (All Time, Last 7 / 30 / 90 Days, This Month, Last Month, This Year, Custom range)
- Single location or multi-location rollup
- Review source filter (default: all sources)

Minimum data points needed:

- Total review count
- Review count per source
- Rating distribution (count per star level, 1 to 5)
- Review velocity (new reviews within the selected time period)
- Timestamp of the most recent review

Anything beyond that is a bonus. Thanks in advance!
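To illustrate the end goal: data from the requested endpoint could be rendered straight into a crawlable JSON-LD block on the CMS profile page. The response shape below is entirely hypothetical (the endpoint doesn't exist yet); the `LocalBusiness` and `AggregateRating` types are real schema.org vocabulary.

```python
# Sketch: turn a (hypothetical) stats response from the requested read-only
# endpoint into schema.org JSON-LD for embedding in a <script> tag.
import json

stats = {  # hypothetical API response; field names are assumptions
    "total_reviews": 128,
    "average_rating": 4.6,
    "rating_distribution": {"1": 2, "2": 3, "3": 10, "4": 35, "5": 78},
}

def to_json_ld(business_name: str, s: dict) -> str:
    """Render review stats as schema.org JSON-LD (real vocabulary)."""
    data = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": business_name,
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": s["average_rating"],
            "reviewCount": s["total_reviews"],
            "bestRating": 5,
        },
    }
    return json.dumps(data, indent=2)
```

Unlike an iframe widget, this markup lives in the page source, so AI crawlers and search engines can actually read and cite it.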

Severi 15 days ago


💡 Feature Request