In this interview, we talk to Lucas Matheus Alves da Silva about RecomendeMe, a platform designed to counter algorithmic fatigue by prioritizing human curation. The project connects users through shared tastes in movies, books, and music, fostering meaningful discovery beyond engagement metrics.
What does RecomendeMe do? And why is now the time for it to exist?
RecomendeMe is a human-first recommendation platform built around a simple idea: cultural discovery works better when it comes from people, not from opaque ranking systems. Instead of passively consuming algorithmic feeds, users actively share recommendations for movies, books, music, and other cultural works—always anchored in personal context, intention, and lived experience.
The platform reframes discovery as a social and cultural exchange rather than a performance optimized for clicks. A recommendation on RecomendeMe isn’t surfaced because it drives engagement, but because someone genuinely found value in it and chose to pass it on. This restores meaning to the act of recommending, turning it into a form of trust rather than a signal in a data pipeline.
The timing matters. Algorithmic fatigue has reached a tipping point: users feel overwhelmed by infinite feeds, repetitive suggestions, and systems that optimize for attention instead of relevance. At the same time, there’s a growing desire for smaller, community-driven spaces where context, taste, and human judgment matter again. RecomendeMe emerges precisely at this intersection—offering an alternative model of discovery that values depth over scale, and human perspective over automation.
What is your traction to date? How many people does RecomendeMe reach?
RecomendeMe currently reaches approximately 5,000–10,000 monthly users, driven almost entirely by organic discovery and social sharing. More important than raw reach, however, is how people use the platform. Users don’t just pass through a feed—they return repeatedly to explore recommendations, follow trusted curators, and save content for later discovery.
Sessions tend to be intentional rather than passive: people spend time reading the context behind recommendations, moving across different cultural categories, and using the platform as a reference point rather than a scrolling destination. A significant portion of users come back weekly, often treating RecomendeMe as a personal or communal “memory” of things worth watching, reading, or listening to—something closer to a cultural utility than a traditional social network.
Who does your project serve? What’s exciting about your users and customers?
RecomendeMe serves people who feel increasingly disconnected from algorithm-driven discovery and are looking for recommendations grounded in trust, taste, and human context. Its users are culturally curious individuals—readers, cinephiles, music listeners, students, and creators—who value depth over volume and prefer intentional discovery to infinite feeds.
What technologies were used in the making of RecomendeMe? And why did you choose ones most essential to your techstack?
RecomendeMe is built on a deliberately pragmatic technology stack, chosen to support long-term reliability, interpretability, and human-centered design rather than rapid experimentation at any cost. At its core, the platform uses established web technologies—PHP, Apache, and MySQL—to ensure stability, transparency, and ease of iteration. These tools allow the system to scale organically while remaining understandable and maintainable, both for developers and partners.
To model the richness of human taste and social connection, RecomendeMe integrates Neo4j for handling relationship graphs. Recommendations are not treated as isolated data points, but as part of a broader network linking people, cultural works, contexts, and shared affinities. A graph database is essential for capturing these nuanced connections without flattening them into simplistic ranking scores.
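To give a sense of how such a relationship graph can be queried, here is a minimal sketch using Cypher through the official Neo4j Python driver. The node labels, relationship types, property names, and connection details are hypothetical illustrations of the kind of schema described above, not RecomendeMe's actual data model.

```python
# Minimal sketch: surfacing works recommended by curators a user follows.
# Labels (User, Work), relationship types (FOLLOWS, RECOMMENDED, SAVED) and
# connection details are illustrative assumptions, not RecomendeMe's schema.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # hypothetical connection details
AUTH = ("neo4j", "password")

FOLLOWED_CURATOR_RECS = """
MATCH (me:User {id: $user_id})-[:FOLLOWS]->(curator:User)-[r:RECOMMENDED]->(work:Work)
WHERE NOT (me)-[:RECOMMENDED|SAVED]->(work)
RETURN work.title   AS title,
       curator.name AS recommended_by,
       r.context    AS context        // the human-written "why"
ORDER BY r.created_at DESC
LIMIT 20
"""

def recommendations_for(user_id: str) -> list[dict]:
    """Return works recommended by curators the user follows, with their context."""
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        records, _, _ = driver.execute_query(FOLLOWED_CURATOR_RECS, user_id=user_id)
        return [record.data() for record in records]
```

Because relationships like FOLLOWS and RECOMMENDED are first-class citizens of the data model, a query of this shape returns who recommended a work and the context they attached to it, rather than a single flattened ranking score.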
On top of this structural layer, the platform incorporates AI augmentation rather than AI replacement. GPT-based models are used to interpret and synthesize natural language context—such as why a recommendation matters or how it relates emotionally or culturally—while R supports exploratory analysis and pattern discovery across the dataset. This hybrid approach allows RecomendeMe to combine qualitative human expression with quantitative insight, without collapsing one into the other.
What is the traction to date for RecomendeMe? Around the web, who’s been noticing?
RecomendeMe has been identified as a notable signal by Trend Hunter, and its presence has extended well beyond a single platform or audience. The project has sparked organic conversations across social networks such as X and Threads, but also in smaller, more intentional spaces—local tech communities, university circles, cultural collectives, and informal curator networks.
What stands out is where these conversations happen. RecomendeMe is often mentioned in discussions about algorithmic fatigue, long-tail discovery, and the search for alternatives to engagement-driven feeds. Users reference it not as a trend, but as a practical example of a different approach to recommendation—one rooted in trust, context, and human judgment.
These signals appear across multiple “corners” of the internet: from independent creators sharing curated lists, to students and researchers discussing cultural data, to local cultural organizers exploring new ways to surface meaningful content. Rather than spreading through mass amplification, RecomendeMe circulates through relevance—traveling via communities that are actively questioning how discovery works today.
RecomendeMe scored a 56 Proof of Usefulness score (proofofusefulness.com/recomendeme-report). How do you feel about that? Does it need to be reassessed, or is it just right?
A score of 56 feels fair—and useful—in the best sense of the word. RecomendeMe was never built to optimize for scores or short-term signals, but to test whether a human-first approach to discovery can sustain real use and trust over time. In that sense, the score reflects where the project genuinely is today: already useful, clearly adopted, but still early in its evolution.
What’s encouraging is that the score aligns with observable behavior: recurring users, intentional engagement, and organic conversations rather than hype or artificial growth tactics. At the same time, it highlights areas where RecomendeMe can become more explicit and measurable about its impact, especially as the platform expands its AI-augmented and institutional use cases.
So it doesn’t feel like something that needs to be “fixed,” but rather reassessed over time. As more usage patterns, partnerships, and human-centered data layers mature, the usefulness becomes easier to quantify. The score feels less like a verdict and more like a snapshot—accurate for this moment, and expected to change as the system deepens.
What excites you about RecomendeMe’s potential usefulness?
What excites me most is the opportunity to rethink how AI participates in cultural discovery. Instead of training systems primarily on engagement signals—clicks, watch time, or virality—RecomendeMe is grounded in real human recommendations, where context, intention, and trust are explicit rather than inferred.
This creates space for more human-centered AI systems: models that learn not just what people consume, but why they value something, when it makes sense, and for whom it resonates. By anchoring LLMs in human-written recommendations and social context, RecomendeMe explores how AI can augment discovery without replacing human judgment.
The long-term potential is an AI layer that feels less extractive and more assistive—one that supports thoughtful exploration, preserves cultural nuance, and makes discovery feel transparent and meaningful. That shift, from optimization to understanding, is what makes this project genuinely exciting to me.
Walk us through your most concrete evidence of usefulness. Not vanity metrics or projections - what's the one data point that proves people genuinely need what you've built?
The clearest proof of usefulness is repeat, intentional use without algorithmic pressure. A significant portion of RecomendeMe users return regularly—not because they’re pulled back by notifications or infinite feeds, but because the platform has become a reference point for cultural decisions. People come back when they want to decide what to watch, read, or listen to, not just to scroll.
One specific signal stands out: users actively save, revisit, and share recommendations outside the platform—sending links in private messages, group chats, classrooms, and local cultural circles. That behavior only happens when content carries trust and lasting value. There’s no incentive mechanism pushing it; it’s entirely voluntary.
In other words, RecomendeMe is used like a tool, not a timeline. When people return to something unprompted, treat it as a memory or a reference, and recommend it to others without being asked, that’s strong evidence they genuinely need it.
How do you measure genuine user adoption versus "tourists" who sign up but never return? What's your retention story?
We distinguish genuine adoption from casual sign-ups by looking at repeat, intentional actions, not surface-level activity. “Tourists” typically register, browse briefly, and never leave a trace of intent. Adopted users, by contrast, return to the platform to do something specific: save recommendations, follow trusted curators, revisit past content, or share links externally.
Retention on RecomendeMe isn’t driven by notifications or algorithmic nudges. Instead, we track whether users come back unprompted, often around real decision moments—choosing a film for a screening, assembling a reading list, or exploring a new genre. These users show patterns of spaced, recurring visits rather than daily compulsive use, which aligns with the platform’s role as a cultural utility rather than a feed.
If we re-score your project in 12 months, which criterion will show the biggest improvement, and what are you doing right now to make that happen?
The biggest improvement will likely appear in depth of use and measurable usefulness, rather than raw reach. RecomendeMe is already used intentionally, but over the next 12 months that usefulness will become easier to observe and quantify as more people rely on it in real contexts.
Right now, the focus is on strengthening repeat-use behaviors: improving saving and revisiting flows, making recommendations easier to reference over time, and deepening the contextual layer around each recommendation. In parallel, we’re formalizing use cases with universities, cultural groups, and local organizations—places where discovery has a clear purpose and decisions are made collectively.
We’re also refining how we surface human context through AI augmentation, so recommendations remain interpretable and transparent rather than optimized for engagement. These steps are already underway, and they directly reinforce the behaviors that matter most for usefulness: trust, return visits, and long-term relevance.
How did you hear about HackerNoon? Tell us about your experience with HackerNoon.
I’ve been following HackerNoon since the early days, around 2023, back when it felt like a corner of the internet where builders, engineers, and curious minds actually talked shop. Over the years, it became one of the few places where technical depth, independent thinking, and long-form reflection still mattered.
Beyond reading, I’ve also been an active contributor—writing regularly about technology, culture, and how systems shape human behavior. What I value most about HackerNoon is the community itself: people who are more interested in understanding how things work than in chasing trends. It’s one of the rare platforms where thoughtful projects and unconventional ideas can still find an audience.
Given your current organic reach of 5k-10k users, what specific viral mechanism or user behavior has been the primary driver of this growth without paid marketing?
The primary driver of RecomendeMe’s organic growth has been contextual sharing rooted in community engagement, not traditional virality. Users don’t share content to maximize reach; they share it to invite others into a cultural context. Recommendations are passed along in group chats, classrooms, reading groups, cineclubs, and local tech communities—often tied to a specific moment, discussion, or event.
Offline interaction plays a key role. RecomendeMe is frequently used around cultural hubs and small-scale events—film screenings, university discussions, book clubs, and local meetups—where people collectively decide what to watch, read, or listen to. In these settings, the platform becomes a shared reference, and links naturally circulate before and after the event.
This loop—human recommendation → community use → contextual sharing → return visits—has sustained growth without paid marketing. Because the platform is useful in real social situations, it spreads through relevance rather than amplification. That behavior scales slowly, but it compounds, which is why the audience grows organically while maintaining high trust and engagement.
As you scale, how do you plan to maintain the quality of "human" recommendations without letting the noise of a larger user base dilute the trust factor?
As RecomendeMe scales, the goal isn’t to maximize volume, but to preserve signal. Trust in human recommendations erodes quickly when systems reward noise, so growth is being approached deliberately rather than aggressively.
Quality is maintained through context, not popularity. Recommendations are anchored in explanations, intent, and relational signals—who is recommending, to whom, and in what context—rather than raw engagement. As the user base grows, this allows relevance to remain local, situational, and interpretable instead of flattening into global rankings.
AI plays a supporting role, not a curatorial one. Models are used to help surface clarity, reduce redundancy, and highlight meaningful patterns, but never to replace human judgment or optimize for attention. This keeps the recommendation layer transparent and auditable.
Most importantly, scaling happens through communities, not crowds. By growing via cultural hubs, universities, and small groups with shared context, trust can remain intact even as the network expands. If quality ever degrades, growth is not considered a success—and the system is designed to slow down rather than dilute what makes it useful in the first place.
Your tech stack combines Neo4j (graphs) with GPT. How do you technically balance the "hallucination" risks of LLMs with the structured truth of your graph database to ensure useful recommendations?
We treat the graph as the source of truth and GPT as a language/interface layer, not a decision-maker.
- Graph-grounded retrieval first: Every recommendation response starts by querying Neo4j for verified entities and relationships (users, items, tags, communities, co-recommendation paths, recency, trust signals). GPT only sees retrieved facts, not the whole world.
- Constrained generation: The model is instructed to only reference items returned by the graph query. If something isn’t in the retrieved set, it must say “not enough data” rather than inventing.
- Structured output + validation: GPT outputs a structured payload (e.g., candidate IDs + reasons + cited relationship paths). We validate that each cited node/edge exists in Neo4j before showing it. If validation fails, we drop or regenerate the response with tighter constraints (a minimal sketch of this step follows the list).
- Human-context synthesis, not fact invention: GPT’s job is to summarize why something is recommended (the human-written context, overlapping tastes, community signals), not to claim external facts about a movie/book/artist. Anything like “release year,” “awards,” etc. is either pulled from a trusted metadata source or omitted.
- Confidence + fallback behavior: When the graph signal is weak (sparse nodes, cold start), the system switches to safer modes: ask a clarifying question, show a diverse shortlist, or recommend “trending within your communities” without strong causal claims.
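To make the “structured output + validation” step concrete, here is a minimal sketch in Python. The payload shape, node labels, and connection details are assumptions for illustration only, not RecomendeMe’s actual implementation; the point is simply that every candidate the model cites is checked against the graph before it is shown.

```python
# Minimal sketch of validating a GPT-produced payload against the graph.
# Payload format and schema are illustrative assumptions, not the real system.
import json
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # hypothetical connection details
AUTH = ("neo4j", "password")

CHECK_EDGE = """
MATCH (u:User {id: $curator_id})-[:RECOMMENDED]->(w:Work {id: $work_id})
RETURN count(*) > 0 AS edge_exists
"""

def validate_candidates(raw_llm_output: str) -> list[dict]:
    """Keep only candidates whose cited recommendation edge exists in Neo4j."""
    # Assumed payload: {"candidates": [{"work_id": ..., "curator_id": ..., "reason": ...}]}
    payload = json.loads(raw_llm_output)
    verified = []
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        for cand in payload.get("candidates", []):
            records, _, _ = driver.execute_query(
                CHECK_EDGE,
                curator_id=cand["curator_id"],
                work_id=cand["work_id"],
            )
            if records and records[0]["edge_exists"]:
                verified.append(cand)   # grounded in the graph: safe to show
            # otherwise: drop the candidate, or regenerate with tighter constraints
    return verified
```

In this pattern the model can phrase and summarize, but it cannot introduce an item or a connection that the graph does not already contain, which is what keeps hallucination risk bounded.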
