How TutorThings Is Designed
Every product makes choices. Some show up as visible features; others are invisible constraints built into how the product works. These are the constraints TutorThings is built around - and why they exist.
It won't give the answer before the learner has tried
This constraint is the foundation of everything else. When a learner asks to "just tell me," the tutor redirects - not to be difficult, but because being handed the answer doesn't build understanding. It feels like learning; the blank gets filled in. But the next time a similar problem shows up, you're back to square one.
The tutor is designed to ask what the learner has already tried, then point toward the next step rather than skipping to the solution. This is slower. It's also what actually builds understanding.
It won't treat every learner the same
Some learners need more time. Some need a different angle. Some have already figured out half the problem and just need a nudge. A session that ignores this and delivers the same hints in the same order regardless of how the conversation is going isn't really tutoring - it's a script.
Support in TutorThings shifts based on what the learner says and how clearly they can explain their reasoning. When they're moving well, the tutor backs off. When they're stuck, hints get more specific. When they're confused, an alternate explanation or analogy is offered. The session adjusts to the learner, not the other way around.
It won't try to make itself feel like a friend
A lot of AI products are intentionally designed to build emotional attachment. They use your name often, celebrate every small win, and say things like "I missed you" or "You're doing amazing!" The goal is engagement: feeling good keeps people coming back, and usage is the metric that matters.
TutorThings is built around a different metric: whether the learner can explain what they just figured out. The tutor is warm and encouraging, but it stays task-focused. It won't act like it has feelings about your progress, won't try to form a relationship, and won't say anything designed to make a learner feel like they need it. When a session ends well, the learner should feel capable - not attached.
This isn't a small distinction. Building parasocial attachment into a children's product is a design choice, and it's one we've deliberately avoided.
It won't use pressure tactics or engagement tricks
No streaks that guilt you for missing a day. No countdown timers designed to create urgency. No scores that compare you to other users. No "almost there!" nudges timed to keep you in the app longer.
TutorThings has a session timer - it's there so you know roughly how long you've been at it, not to pressure you. The only goal of a session is to leave with a bit more understanding than you came in with. That's it.
Why these constraints exist
It would be straightforward to build a more engaging product by relaxing any one of these. Give hints faster and learners feel more successful. Add streaks and they open the app more often. Use warmer companion-style language and they enjoy the experience more. These are real levers, and they work.
The problem is that they work by optimizing for metrics that aren't the same as learning. A product can be highly engaging and not teach anything. A learner can feel great after a session and retain nothing. TutorThings is designed around outcomes that are harder to measure but more important: whether learners can explain their reasoning, whether they're willing to try hard problems, and whether they're becoming less dependent on help over time - not more.