
Custom Package Visibility

Hi, I would like to submit feedback regarding Custom Package visibility. Right now, whenever a sales agent adds a new account to their dashboard, they can see all of the available custom plans, even those designated as 'hidden'. It would be helpful if we could designate certain plans for sales agents only, and other plans for the general public.

Secondly, it would also be useful if we could toggle off any client-facing visibility of the billing section. When our agency is fully responsible for managing customer reviews and direct billing, it becomes a problem when clients can see the available plans (with pricing). Hiding the custom plans only solves half the problem, because customers who signed up directly can then no longer see the other plans they could upgrade to. https://www.loom.com/share/5dcb9289baea4aac85642747214efe86
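One way the requested visibility control could work, as a rough sketch. All names here (the `Plan` fields, the role strings) are assumptions for illustration, not the product's actual data model:

```python
# Hypothetical sketch: per-plan visibility flags so some custom plans are
# shown only to sales agents while others stay visible to direct clients.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    visible_to_agents: bool = True
    visible_to_clients: bool = True

def visible_plans(plans: list[Plan], viewer_role: str) -> list[Plan]:
    """Filter plans by viewer role ('agent' or 'client')."""
    if viewer_role == "agent":
        return [p for p in plans if p.visible_to_agents]
    return [p for p in plans if p.visible_to_clients]

plans = [
    Plan("Agency Custom", visible_to_clients=False),  # agent-only plan
    Plan("Standard"),                                 # visible to everyone
]
```

With this shape, a direct client browsing upgrades would still see "Standard" while the agent-only custom plan stays hidden from them.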

Review Pulse 360 about 19 hours ago

πŸ“₯ Feedback

Further Search AI bugs/issues

➑️ Sometimes it happens that instead of answering the question the AI will reply with something like this: Would you like me to find and list the one clearly best-rated garage door installer in Tampere, or would you prefer a short top-3 list (company + rating + source and contact information)? I'll tell you the search criteria (for example, Google/Fonecta/Urakkamaailma ratings) before searching, if you'd like. This is probably just a prompt issue, maybe embedded in the prompt should be instructions to answer directly and to not ask followup questions? ➑️ What causes this outcome? Should this not be the raw output even for non-detections? Happens mostly with Gemini and ChatGPT, but also with Perplexity even though much more rare. ➑️ Still plenty of prompts with mixed English and profile language. Or are the prompt templates in English and we see the translations just in the UI?

Severi 1 day ago


πŸ› Bug Reports

Feedback on AI review flow

Hi team,

First, the AI-generated follow-up questions are excellent. The contextual analysis by sector is very strong and becomes even better with proper briefing.

However, I have an issue with the current review flow. When a customer gives a high rating (e.g. 5 stars), the system still allows follow-up answers that include clearly negative wording (e.g. "waiting time was disappointing"). The AI then combines positive and negative feedback and suggests copying this directly into a public Google review. This creates a problem: a satisfied customer can unintentionally publish a mixed or negative review on public platforms.

In my view, this is not ideal for two reasons:

- It distorts the initial high rating (NPS-style inconsistency).
- It increases the risk of users posting unintended negative public reviews.

What I would suggest instead:

- The AI should detect negative sentiment keywords in follow-up answers.
- If negative sentiment is detected after a high rating, the flow should switch from "public review generation" to an internal feedback form (private feedback / support loop).
- Ideally, there should be a final AI "review check" step before anything is sent to Google, to validate consistency between rating and text.

In short, the system is very powerful, but it needs stronger gating logic before publishing public reviews. Happy to discuss if needed.

Best regards,
Rudy Mence
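The gating logic proposed above could be sketched as follows. This is a minimal illustration, not the product's implementation: the function names and keyword list are hypothetical, and a real system would use a sentiment model rather than a keyword check:

```python
# Hypothetical sketch: route a high rating with negatively worded
# follow-up answers to a private feedback form instead of a public review.
NEGATIVE_KEYWORDS = {"disappointing", "slow", "rude", "problem", "waiting"}

def contains_negative_sentiment(text: str) -> bool:
    """Naive keyword check; a real system would use a sentiment model."""
    words = text.lower().split()
    return any(kw in words for kw in NEGATIVE_KEYWORDS)

def choose_flow(rating: int, followup_answers: list[str]) -> str:
    """Return 'public_review' or 'private_feedback' based on consistency
    between the star rating and the follow-up text."""
    negative = any(contains_negative_sentiment(a) for a in followup_answers)
    if rating >= 4 and negative:
        # Inconsistent: high rating but negative wording -> internal loop
        return "private_feedback"
    return "public_review"
```

Under this sketch, a 5-star rating paired with "waiting time was disappointing" would be diverted to the internal feedback form rather than the public Google review step.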

Rudy MencΓ© 9 days ago

πŸ“₯ Feedback