If you’ve ever tried recommending a course to a friend, you know how personal it is. Now imagine doing that at scale, for lakhs of students and their parents—with real academic futures on the line.
That’s exactly what Lohith Nagaraj, VP of Product at Narayana Education, is building:
An AI-driven decision engine that helps match students to the right course, at the right time—based on data, not gut.
We spoke to him as part of our Experiment Engine series, and here are a few ideas that left us thinking:
1. AI is only as good as your hypothesis
Lohith isn’t just throwing data at a model and hoping for magic. Every experiment starts with a clear hypothesis:
“Is a student’s academic performance a better predictor than their stated interest? What about parent goals?”
The team runs controlled tests by varying these inputs to see which combinations actually lead to better course fits—and, ultimately, outcomes.
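A controlled test like this ultimately comes down to comparing outcomes between variants. As a minimal sketch (the variant names, sample sizes, and success counts here are hypothetical, not Narayana's data), here's how a team might check whether ranking by academic performance beats ranking by stated interest, using a two-proportion z-test:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is variant B's success rate
    significantly different from variant A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: variant A ranks courses by stated interest,
# variant B by academic performance. "Success" = the student stayed
# in the recommended course past one term.
z = two_proportion_z(success_a=420, n_a=1000, success_b=465, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 → significant at the 5% level
```

The point isn't the statistics; it's that the hypothesis ("academic performance predicts fit better than stated interest") is stated up front, and the success metric is an outcome, not a click.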
📌 Takeaway: AI doesn’t replace human judgment—it scales it. But only if you ask the right questions first.
2. Parents are part of the journey (whether you like it or not)
In B2C edtech, your end user isn’t your only stakeholder. Parents make decisions. So Narayana experiments with different communication styles, tones, and even nudge timings—based on whether the app is being used by the student or the parent.
“A reminder that works for a 17-year-old doesn’t work for a 45-year-old. We had to test that the hard way.”
📌 Takeaway: Know your personas—and their power dynamics. Your “user” may not be your decision-maker.
3. Experiments can’t just chase engagement—they have to improve outcomes
At Narayana, engagement metrics like clicks and time spent are important—but they’re not the end goal. The real question is:
Did the student end up in a better-fit course? Did they stick with it? Did performance improve?
That means experiments run longer, require cleaner data, and tie back to actual learning outcomes.
📌 Takeaway: In high-stakes products, “conversion” means nothing unless it leads to success.
4. Even nudges need a test plan
Lohith’s team doesn’t launch nudges blindly. They test which moments matter, which tone works best, and how often is too often. The goal isn’t to “get clicks”—it’s to create confidence in decision-making.
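Testing "not nudging" means the experiment needs a genuine holdout arm, and assignment has to be stable so a user doesn't bounce between treatments. One common way to do that (a sketch with made-up arm names, not Narayana's implementation) is deterministic bucketing by hashing the user ID with the experiment name:

```python
import hashlib

# Hypothetical arms, including a no-nudge holdout so "silence"
# can be measured against one and three reminders.
ARMS = ["holdout_no_nudge", "one_reminder", "three_reminders"]

def assign_arm(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into a nudge-frequency arm.
    Hashing (experiment + user_id) keeps assignment stable across
    sessions and independent across different experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(ARMS)
    return ARMS[bucket]

# The same user always lands in the same arm:
assert assign_arm("student_123", "nudge_timing_v1") == \
       assign_arm("student_123", "nudge_timing_v1")
```

Salting with the experiment name matters: it means the users who happened to be in the holdout for one nudge test aren't systematically the holdout for the next one.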
“Sometimes not nudging is more powerful than sending five reminders.”
📌 Takeaway: Build a test culture not just for big features—but for the micro-decisions too.
5. You don’t need 100 experiments. You need 10 great ones.
What’s refreshing about Narayana’s approach is their focus. Instead of running dozens of scattershot A/B tests, they concentrate on a few high-leverage experiments tied to real business outcomes.
“It’s better to deeply understand 3 hypotheses than to run 30 shallow tests.”
📌 Takeaway: Quality > quantity. Not every button color matters. Some decisions are worth obsessing over.
Final Thought:
At Plotline, we talk a lot about intelligent nudges, personalized journeys, and behavioral feedback loops. What Narayana is building is all of that—and more. It’s experimentation with purpose.
Because in education, you’re not optimizing for a click.
You’re optimizing for someone’s future.
✉️ Want to get more stories like this in your inbox?
Subscribe to The Experiment Engine – where we decode how top product teams test, learn, and ship smarter.